Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/138370
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21362 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-kHXEHnVWmJsO/agent.2161
SSH_AGENT_PID=2163
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp@tmp/private_key_16135314201208585084.key (/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp@tmp/private_key_16135314201208585084.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/70/138370/2 # timeout=30
 > git rev-parse 98c473b1b99348ea19603eb6a0c4932cc295274d^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision 98c473b1b99348ea19603eb6a0c4932cc295274d (refs/changes/70/138370/2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 98c473b1b99348ea19603eb6a0c4932cc295274d # timeout=30
Commit message: "Fix helm plugin install failure"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 54d234de0d9260f610425cd496a52265a4082441 # timeout=10
provisioning config files...
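For reference, the Gerrit checkout performed above can be reproduced outside Jenkins. The commands below are a minimal sketch, assuming anonymous read access to the cloud.onap.org mirror; the change ref and commit hash are taken from the log above, and the local directory name is arbitrary.

git clone git://cloud.onap.org/mirror/policy/docker.git docker
cd docker
# fetch patch set 2 of Gerrit change 138370, as the job did
git fetch origin refs/changes/70/138370/2
# detach onto the fetched revision (FETCH_HEAD points at the same commit)
git checkout -f 98c473b1b99348ea19603eb6a0c4932cc295274d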
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins7806589310899969617.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-D5kL
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-D5kL/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.1.1 from /tmp/venv-D5kL/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.4.0
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.139
botocore==1.34.139
bs4==0.0.2
cachetools==5.3.3
certifi==2024.7.4
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.7.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.3
email_validator==2.2.0
filelock==3.15.4
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.31.0
httplib2==0.22.0
identify==2.5.36
idna==3.7
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==30.1.0
lftools==0.37.10
lxml==5.2.2
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
netifaces==0.11.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.2.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.1.0
oslo.config==9.5.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==6.1.0
oslo.serialization==5.4.0
oslo.utils==7.2.0
packaging==24.1
pbr==6.0.0
platformdirs==4.2.2
prettytable==3.10.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.5.0
python-novaclient==18.6.0
python-openstackclient==6.6.0
python-swiftclient==4.6.0
PyYAML==6.0.1
referencing==0.35.1
requests==2.32.3
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.1
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.2
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.5
tqdm==4.66.4
typing_extensions==4.12.2
tzdata==2024.1
urllib3==1.26.19
virtualenv==20.26.3
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
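The lf-activate-venv() step above comes from the LF releng tooling. For anyone reproducing the Python environment locally, a rough manual equivalent is sketched below; the venv path is illustrative (the job used /tmp/venv-D5kL), and this does not claim to match the script's exact behaviour.

python3.10 -m venv /tmp/venv-example        # hypothetical path; any directory works
/tmp/venv-example/bin/pip install --upgrade pip
/tmp/venv-example/bin/pip install lftools   # pulls in the dependency set listed above
export PATH=/tmp/venv-example/bin:$PATH     # mirrors "Adding /tmp/venv-D5kL/bin to PATH"
pip freeze                                  # roughly what "Generating Requirements File" records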
[policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/sh /tmp/jenkins5704485268417652712.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/sh -xe /tmp/jenkins10795185852797236614.sh
+ /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/csit/run-project-csit.sh drools-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 88 60.0M   88 53.0M    0     0   138M      0 --:--:-- --:--:-- --:--:--  138M
100 60.0M  100 60.0M    0     0   147M      0 --:--:-- --:--:-- --:--:--  268M
Setting project configuration for: drools-pdp
Configuring docker compose...
Starting drools-pdp application
time="2024-07-04T13:21:44Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string."
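Two details in the step above matter for local runs: the script falls back to downloading a standalone Compose binary because the `docker compose` plugin is missing, and compose warns because TEST_ENV is unset. A minimal manual equivalent is sketched below; the plugin path and the TEST_ENV value are illustrative assumptions, not taken from the CSIT scripts.

# install the Compose v2 CLI plugin for the current user
mkdir -p ~/.docker/cli-plugins
curl -sSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version   # should now resolve instead of "'compose' is not a docker command"
# give TEST_ENV a value before "docker compose up" to silence the warning (empty string assumed here)
export TEST_ENV=""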
mariadb Pulling
drools-pdp Pulling
kafka Pulling
policy-db-migrator Pulling
api Pulling
zookeeper Pulling
pap Pulling
[per-layer "Pulling fs layer" / Downloading / Verifying Checksum / Extracting / "Pull complete" progress output omitted]
pap Pulled
api Pulled
policy-db-migrator Pulled
mariadb Pulled
[layer downloads and extraction for the remaining kafka, zookeeper and drools-pdp images were still in progress at this point in the log]
22.54MB/50.21MB 54aeda60d8bb Extracting [=========================> ] 25.17MB/50.21MB 54aeda60d8bb Extracting [================================> ] 32.51MB/50.21MB 54aeda60d8bb Extracting [====================================> ] 36.7MB/50.21MB 54aeda60d8bb Extracting [======================================> ] 38.8MB/50.21MB 54aeda60d8bb Extracting [========================================> ] 40.89MB/50.21MB 54aeda60d8bb Extracting [============================================> ] 44.56MB/50.21MB 54aeda60d8bb Extracting [===============================================> ] 47.71MB/50.21MB 54aeda60d8bb Extracting [==================================================>] 50.21MB/50.21MB a3ab11953ef9 Extracting [> ] 426kB/39.52MB a3ab11953ef9 Extracting [> ] 426kB/39.52MB a3ab11953ef9 Extracting [==================> ] 14.91MB/39.52MB a3ab11953ef9 Extracting [==================> ] 14.91MB/39.52MB a3ab11953ef9 Extracting [==========================================> ] 33.65MB/39.52MB a3ab11953ef9 Extracting [==========================================> ] 33.65MB/39.52MB a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB a3ab11953ef9 Pull complete 54aeda60d8bb Pull complete a3ab11953ef9 Pull complete 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB b96ca003f2ab Extracting [==================================================>] 372B/372B b96ca003f2ab Extracting [==================================================>] 372B/372B 91ef9543149d Pull complete 91ef9543149d Pull complete b96ca003f2ab Pull complete 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 6c22ee15992d Extracting [> ] 557.1kB/96.48MB 2ec4f59af178 Pull complete 2ec4f59af178 Pull complete 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 6c22ee15992d Extracting [=======> ] 15.04MB/96.48MB 8b7e81cd5ef1 Pull complete 8b7e81cd5ef1 Pull complete c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B 6c22ee15992d Extracting [=================> ] 33.42MB/96.48MB 6c22ee15992d Extracting [===========================> ] 52.36MB/96.48MB c52916c1316e Pull complete c52916c1316e Pull complete d93f69e96600 Extracting [> ] 557.1kB/115.2MB 6c22ee15992d Extracting [====================================> ] 69.63MB/96.48MB 
7a1cb9ad7f75 Extracting [> ] 557.1kB/115.2MB d93f69e96600 Extracting [=====> ] 12.26MB/115.2MB 6c22ee15992d Extracting [============================================> ] 86.34MB/96.48MB 7a1cb9ad7f75 Extracting [=====> ] 12.81MB/115.2MB 6c22ee15992d Extracting [==================================================>] 96.48MB/96.48MB d93f69e96600 Extracting [===========> ] 27.3MB/115.2MB 7a1cb9ad7f75 Extracting [==========> ] 23.95MB/115.2MB 6c22ee15992d Pull complete d93f69e96600 Extracting [================> ] 37.32MB/115.2MB 7a1cb9ad7f75 Extracting [================> ] 38.99MB/115.2MB fc6d5acc9ab8 Extracting [> ] 557.1kB/97.42MB d93f69e96600 Extracting [======================> ] 52.36MB/115.2MB 7a1cb9ad7f75 Extracting [======================> ] 52.92MB/115.2MB fc6d5acc9ab8 Extracting [=====> ] 10.03MB/97.42MB d93f69e96600 Extracting [=============================> ] 67.96MB/115.2MB 7a1cb9ad7f75 Extracting [==============================> ] 70.19MB/115.2MB fc6d5acc9ab8 Extracting [========> ] 17.27MB/97.42MB d93f69e96600 Extracting [====================================> ] 83.56MB/115.2MB 7a1cb9ad7f75 Extracting [====================================> ] 84.12MB/115.2MB fc6d5acc9ab8 Extracting [==============> ] 27.3MB/97.42MB d93f69e96600 Extracting [==========================================> ] 98.04MB/115.2MB 7a1cb9ad7f75 Extracting [===========================================> ] 100.3MB/115.2MB fc6d5acc9ab8 Extracting [===================> ] 38.44MB/97.42MB d93f69e96600 Extracting [===============================================> ] 110.3MB/115.2MB 7a1cb9ad7f75 Extracting [================================================> ] 111.4MB/115.2MB fc6d5acc9ab8 Extracting [=========================> ] 49.58MB/97.42MB d93f69e96600 Extracting [=================================================> ] 113.6MB/115.2MB 7a1cb9ad7f75 Extracting [==================================================>] 115.2MB/115.2MB fc6d5acc9ab8 Extracting [===============================> ] 60.72MB/97.42MB d93f69e96600 Extracting [==================================================>] 115.2MB/115.2MB 7a1cb9ad7f75 Pull complete 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB d93f69e96600 Pull complete bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB fc6d5acc9ab8 Extracting [===================================> ] 69.07MB/97.42MB bbb9d15c45a1 Pull complete fc6d5acc9ab8 Extracting [==========================================> ] 83.56MB/97.42MB 0a92c7dea7af Pull complete kafka Pulled zookeeper Pulled fc6d5acc9ab8 Extracting [==================================================>] 97.42MB/97.42MB fc6d5acc9ab8 Pull complete drools-pdp Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container mariadb Creating Container mariadb Created Container policy-db-migrator Creating Container zookeeper Created Container kafka Creating Container kafka Created Container policy-db-migrator Created Container policy-api Creating Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-drools-pdp Creating Container policy-drools-pdp Created Container zookeeper Starting Container mariadb Starting Container mariadb Started Container policy-db-migrator Starting Container policy-db-migrator 
Started Container policy-api Starting Container zookeeper Started Container kafka Starting Container policy-api Started Container kafka Started Container policy-pap Starting Container policy-pap Started Container policy-drools-pdp Starting Container policy-drools-pdp Started Waiting for REST to come up on localhost port 30216... NAMES STATUS policy-drools-pdp Up 30 seconds policy-pap Up 30 seconds policy-api Up 32 seconds kafka Up 31 seconds zookeeper Up 34 seconds mariadb Up 35 seconds Build docker image for robot framework Error: No such image: policy-csit-robot Cloning into '/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/csit/resources/tests/models'... Build robot framework docker image Sending build context to Docker daemon 16.49MB Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye 3.10-slim-bullseye: Pulling from library/python 76956b537f14: Pulling fs layer f75f1b8a4051: Pulling fs layer f9adc358e0b8: Pulling fs layer f66e101ef41f: Pulling fs layer b913137adf9e: Pulling fs layer f66e101ef41f: Waiting b913137adf9e: Waiting f75f1b8a4051: Download complete f66e101ef41f: Verifying Checksum f66e101ef41f: Download complete b913137adf9e: Verifying Checksum b913137adf9e: Download complete f9adc358e0b8: Verifying Checksum f9adc358e0b8: Download complete 76956b537f14: Verifying Checksum 76956b537f14: Download complete 76956b537f14: Pull complete f75f1b8a4051: Pull complete f9adc358e0b8: Pull complete f66e101ef41f: Pull complete b913137adf9e: Pull complete Digest: sha256:46193e24d7f1f03f4e2f9e21e1a5f8361ac29c83db447b4d5355fae9445943b0 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye ---> 08150e0479fc Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT} ---> Running in 5b396e58002b Removing intermediate container 5b396e58002b ---> 5743519c1f19 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE} ---> Running in ab4891838629 Removing intermediate container ab4891838629 ---> 8cdeb768d964 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE TEST_ENV=$TEST_ENV ---> Running in d8b88cbe79f7 Removing intermediate container d8b88cbe79f7 ---> 3ec3902b903b Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze ---> Running in 6c1abeafa333 bcrypt==4.1.3 certifi==2024.7.4 cffi==1.17.0rc1 charset-normalizer==3.3.2 confluent-kafka==2.4.0 cryptography==42.0.8 decorator==5.1.1 deepdiff==7.0.1 dnspython==2.6.1 future==1.0.0 idna==3.7 Jinja2==3.1.4 jsonpath-rw==1.4.0 kafka-python==2.0.2 MarkupSafe==2.1.5 more-itertools==5.0.0 ordered-set==4.1.0 paramiko==3.4.0 pbr==6.0.0 ply==3.11 protobuf==5.27.2 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.2rc1 requests==2.32.3 robotframework==7.0.1 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a11 robotlibcore-temp==1.0.2 six==1.16.0 urllib3==2.2.2 Removing intermediate container 6c1abeafa333 ---> 28848a22d4eb Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE} ---> Running in 5aba463086ad Removing intermediate container 5aba463086ad ---> 28bd00d8b6ed Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/ ---> 5e5dd22a39ed Step 8/9 : WORKDIR ${ROBOT_WORKSPACE} ---> Running in ba73be98bc0d Removing intermediate container ba73be98bc0d ---> 06cff87209ef Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ] ---> Running in 13e445715786 Removing intermediate container 
13e445715786 ---> c595cc252ffd Successfully built c595cc252ffd Successfully tagged policy-csit-robot:latest top - 13:23:30 up 3 min, 0 users, load average: 2.80, 1.36, 0.53 Tasks: 195 total, 1 running, 119 sleeping, 0 stopped, 0 zombie %Cpu(s): 14.2 us, 3.1 sy, 0.0 ni, 77.0 id, 5.6 wa, 0.0 hi, 0.1 si, 0.1 st total used free shared buff/cache available Mem: 31G 2.6G 23G 1.1M 5.7G 28G Swap: 1.0G 0B 1.0G NAMES STATUS policy-drools-pdp Up 58 seconds policy-pap Up 59 seconds policy-api Up About a minute kafka Up About a minute zookeeper Up About a minute mariadb Up About a minute CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS fe28b38a1f6f policy-drools-pdp 0.62% 226.1MiB / 31.41GiB 0.70% 29.7kB / 37kB 0B / 8.19kB 53 2545c58d0094 policy-pap 1.74% 496.7MiB / 31.41GiB 1.54% 39.9kB / 44.1kB 0B / 149MB 63 1dd156539fe5 policy-api 0.15% 570.8MiB / 31.41GiB 1.77% 988kB / 646kB 0B / 0B 53 7348e18c76e6 kafka 4.44% 394.4MiB / 31.41GiB 1.23% 113kB / 106kB 0B / 549kB 85 cb6c485b4767 zookeeper 0.16% 100.3MiB / 31.41GiB 0.31% 54.5kB / 48.4kB 201kB / 385kB 59 07e1a21f8089 mariadb 0.02% 102.8MiB / 31.41GiB 0.32% 936kB / 1.18MB 10.9MB / 71.2MB 36 time="2024-07-04T13:23:33Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." Container policy-csit Creating Container policy-csit Created Attaching to policy-csit policy-csit | Invoking the robot tests from: drools-pdp-test.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV: policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... 
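The ROBOT_VARIABLES listed above are handed to run-test.sh inside the policy-csit container. The script itself is copied into the image in Step 7/9 and its exact contents are not part of this log, but a minimal shell sketch of the assumed invocation, using Robot Framework's standard -v NAME:VALUE syntax and the /tmp/results output directory reported below, would look roughly like this:

    # Hypothetical sketch only -- the real run-test.sh is not shown in this log.
    # Variable values are taken verbatim from the ROBOT_VARIABLES listing above;
    # only a subset of the -v flags is reproduced here for brevity.
    robot --outputdir /tmp/results \
      -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
      -v POLICY_PAP_IP:policy-pap:6969 \
      -v POLICY_DROOLS_IP:policy-drools-pdp:9696 \
      "${ROBOT_WORKSPACE}/drools-pdp-test.robot"
    echo "RESULT: $?"

The exit code echoed at the end corresponds to the "RESULT: 0" line printed once the suite finishes.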
policy-csit | ============================================================================== policy-csit | Drools-Pdp-Test policy-csit | ============================================================================== policy-csit | Alive :: Runs Policy PDP Alive Check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Drools-Pdp-Test | PASS | policy-csit | 2 tests, 2 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 policy-csit exited with code 0 NAMES STATUS policy-drools-pdp Up About a minute policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute zookeeper Up About a minute mariadb Up About a minute Shut down started! Collecting logs from docker compose containers... time="2024-07-04T13:23:36Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:36Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:37Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:37Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:37Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:38Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:38Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:38Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-04T13:23:39Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." ======== Logs from kafka ======== kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-07-04 13:22:34,377] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO 
Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,378] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,379] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,379] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,379] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,379] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,379] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,382] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@61d47554 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,385] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-07-04 13:22:34,390] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-07-04 13:22:34,398] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:34,416] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:34,417] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:34,426] INFO Socket connection established, initiating session, client: /172.17.0.6:34202, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:34,463] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000282160000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:34,581] INFO Session: 0x100000282160000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:34,581] INFO EventThread shut down for session: 0x100000282160000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
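The "===> Check if Zookeeper is healthy ..." preflight above opens a short ZooKeeper session (0x100000282160000) against zookeeper:2181 and closes it again before the broker is launched. The image performs this gate with its own Java-based io.confluent.admin.utils.ZookeeperConnectionWatcher; a rough, hypothetical shell equivalent of the same wait-for-dependency step would be:

    # Rough sketch of the readiness gate, not the image's actual implementation:
    # block until the zookeeper service accepts TCP connections on 2181,
    # then proceed to launch the broker.
    until nc -z zookeeper 2181; do
      echo "waiting for zookeeper:2181 ..."
      sleep 2
    done
    echo "zookeeper reachable -- launching kafka"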
kafka | [2024-07-04 13:22:35,283] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-07-04 13:22:35,623] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-07-04 13:22:35,693] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-07-04 13:22:35,694] INFO starting (kafka.server.KafkaServer) kafka | [2024-07-04 13:22:35,694] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-07-04 13:22:35,707] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-07-04 13:22:35,711] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,711] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/u
sr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.
1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,712] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,713] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,715] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-04 13:22:35,718] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-07-04 13:22:35,724] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:35,725] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-07-04 13:22:35,734] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:35,741] INFO Socket connection established, initiating session, client: /172.17.0.6:34204, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:35,752] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000282160001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-04 13:22:35,761] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-07-04 13:22:36,037] INFO Cluster ID = s0RFBie2SzS4BKGvbus8AQ (kafka.server.KafkaServer) kafka | [2024-07-04 13:22:36,040] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-07-04 13:22:36,087] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | 
group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.6-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | 
num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | 
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.metadata.migration.min.batch.size = 200 kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-07-04 13:22:36,115] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-04 13:22:36,115] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-04 13:22:36,116] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-04 13:22:36,118] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-04 13:22:36,147] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-07-04 13:22:36,152] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-07-04 13:22:36,163] INFO Loaded 0 logs in 16ms (kafka.log.LogManager) kafka | [2024-07-04 13:22:36,165] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-07-04 13:22:36,167] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) kafka | [2024-07-04 13:22:36,177] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-07-04 13:22:36,221] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-07-04 13:22:36,253] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-07-04 13:22:36,269] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-07-04 13:22:36,294] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-04 13:22:36,605] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-07-04 13:22:36,624] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-07-04 13:22:36,624] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-07-04 13:22:36,629] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-07-04 13:22:36,633] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-04 13:22:36,656] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,657] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,660] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,661] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,666] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,676] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-07-04 13:22:36,677] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2024-07-04 13:22:36,702] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2024-07-04 13:22:36,728] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1720099356717,1720099356717,1,0,0,72057604810342401,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2024-07-04 13:22:36,730] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2024-07-04 13:22:36,784] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2024-07-04 13:22:36,791] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,798] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,798] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,807] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2024-07-04 13:22:36,814] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:36,822] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,822] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:36,828] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,833] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-07-04 13:22:36,841] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-07-04 13:22:36,845] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2024-07-04 13:22:36,845] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-07-04 13:22:36,859] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,859] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) kafka | [2024-07-04 13:22:36,866] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,872] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,874] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,883] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-04 13:22:36,891] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,898] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,905] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-07-04 13:22:36,917] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2024-07-04 13:22:36,918] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,918] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,919] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2024-07-04 13:22:36,919] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,919] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,922] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,922] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,923] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,923] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2024-07-04 13:22:36,924] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,929] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2024-07-04 13:22:36,930] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2024-07-04 13:22:36,934] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2024-07-04 13:22:36,937] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) kafka | [2024-07-04 13:22:36,937] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-04 13:22:36,938] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-04 13:22:36,942] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-04 13:22:36,942] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-04 13:22:36,945] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-07-04 13:22:36,946] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-07-04 13:22:36,949] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-07-04 13:22:36,949] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-07-04 13:22:36,949] INFO Kafka startTimeMs: 1720099356941 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-07-04 13:22:36,953] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-07-04 13:22:36,953] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2024-07-04 13:22:36,954] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,960] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2024-07-04 13:22:36,961] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,961] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,961] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,962] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,963] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:36,977] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:37,026] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-07-04 13:22:37,041] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-04 13:22:37,106] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) 
(kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-04 13:22:40,248] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-07-04 13:22:40,256] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-07-04 13:22:40,346] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(o5Jk9k0ESK2jx6sCMQn8ZQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(jypnRuU1QBmSqzu_3-rzMA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 
-> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:40,347] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
[2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,355] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-04 13:22:40,357] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-07-04 13:22:40,362] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,362] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,362] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,363] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-04 13:22:40,364] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,547] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,550] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-07-04 13:22:40,553] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-07-04 13:22:40,554] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-07-04 13:22:40,555] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-07-04 13:22:40,556] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2024-07-04 13:22:40,558] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,560] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica 
(state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-04 13:22:40,561] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-07-04 13:22:40,567] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:40,572] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-07-04 13:22:40,573] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,573] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker 
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,574] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 
(state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-07-04 13:22:40,622] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 
(state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 
(state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-07-04 13:22:40,623] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-07-04 13:22:40,624] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-07-04 13:22:40,624] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-07-04 13:22:40,624] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-07-04 13:22:40,627] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2024-07-04 13:22:40,627] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2024-07-04 13:22:40,678] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,689] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,692] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,693] INFO 
[Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,695] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,711] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,712] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,712] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,712] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,712] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,719] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,719] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,719] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,720] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,720] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,732] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,732] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,732] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,733] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,733] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,741] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,742] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,743] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,743] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,743] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,752] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,753] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,753] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,753] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,753] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,762] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,763] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,763] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,763] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,763] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,785] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,786] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,787] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,787] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,787] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,793] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,793] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,794] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,794] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,794] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,800] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,801] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,801] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,801] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,801] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,807] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,807] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,807] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,807] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,808] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,813] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,814] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,814] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,814] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,814] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,823] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,824] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,825] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,825] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,825] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,834] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,835] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,835] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,835] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,836] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,842] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,843] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,843] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,843] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,843] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,849] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,850] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,850] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,850] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,850] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,856] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,856] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,857] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,857] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,857] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,864] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,864] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,864] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,864] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,864] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,869] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,870] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,870] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,870] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,870] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,877] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,877] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,878] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,878] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,878] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,884] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,884] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,885] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,885] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,885] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,891] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,891] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,891] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,891] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,892] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,898] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,898] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,898] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,898] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,898] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,904] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,905] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,905] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,905] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,905] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,912] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,912] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,912] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,912] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,912] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,920] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,920] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,920] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,920] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,920] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,927] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,928] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,928] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,928] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,928] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,935] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,935] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,935] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,935] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,935] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,943] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,944] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,944] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,944] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,944] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,953] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,954] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,954] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,954] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,954] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,963] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,965] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,965] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,965] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,965] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,971] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,971] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,971] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,971] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,971] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,978] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,979] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,979] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,979] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,979] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:40,986] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,986] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,986] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,986] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,987] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:40,993] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:40,993] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-07-04 13:22:40,994] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,994] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:40,994] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(o5Jk9k0ESK2jx6sCMQn8ZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,000] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,001] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,002] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,002] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,002] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:41,014] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,015] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,015] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,015] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,015] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,027] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,028] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,028] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,028] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,028] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,036] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,036] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,036] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,037] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,037] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:41,046] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,047] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,047] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,047] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,047] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,055] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,056] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,056] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,056] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,056] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,066] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,067] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,067] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,067] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,068] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:41,075] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,076] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,076] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,076] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,076] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,082] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,082] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,082] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,082] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,082] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,089] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,089] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,089] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,090] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,090] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:41,095] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,096] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,096] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,096] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,096] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,103] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,104] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,104] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,104] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,104] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,110] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,111] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,111] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,111] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,111] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:41,120] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,120] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,121] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,121] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,121] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,128] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,128] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,128] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,128] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,128] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-04 13:22:41,135] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-04 13:22:41,136] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-04 13:22:41,136] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,136] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-04 13:22:41,136] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(jypnRuU1QBmSqzu_3-rzMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-07-04 13:22:41,142] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-07-04 13:22:41,143] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-07-04 13:22:41,151] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,154] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,158] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,159] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,161] INFO [Broker id=1] Finished LeaderAndIsr request in 590ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-07-04 13:22:41,166] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=jypnRuU1QBmSqzu_3-rzMA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=o5Jk9k0ESK2jx6sCMQn8ZQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-07-04 13:22:41,167] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 11 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,168] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,169] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,169] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,169] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,169] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,174] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,174] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,174] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE 
[Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,175] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,176] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,177] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,177] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 19 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,177] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,177] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,177] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,177] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,177] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,177] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,177] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,177] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,177] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,177] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,178] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,178] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,178] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,178] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,178] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,178] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,178] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 21 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,179] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,180] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,181] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-04 13:22:41,182] 
TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-07-04 13:22:41,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 27 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 29 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 30 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 30 milliseconds for epoch 0, of which 30 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 31 milliseconds for epoch 0, of which 30 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-04 13:22:41,310] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 505d9243-c3c0-46d4-9716-d160b5fb68fe in Empty state. Created a new member id consumer-505d9243-c3c0-46d4-9716-d160b5fb68fe-2-901fc659-928c-4c7c-8a49-187d2fdaf385 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,329] INFO [GroupCoordinator 1]: Preparing to rebalance group 505d9243-c3c0-46d4-9716-d160b5fb68fe in state PreparingRebalance with old generation 0 (__consumer_offsets-11) (reason: Adding new member consumer-505d9243-c3c0-46d4-9716-d160b5fb68fe-2-901fc659-928c-4c7c-8a49-187d2fdaf385 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:41,981] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:41,981] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:41,985] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:41,985] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) kafka | [2024-07-04 13:22:44,346] INFO [GroupCoordinator 1]: Stabilized group 505d9243-c3c0-46d4-9716-d160b5fb68fe generation 1 (__consumer_offsets-11) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:22:44,371] INFO [GroupCoordinator 1]: Assignment received from leader consumer-505d9243-c3c0-46d4-9716-d160b5fb68fe-2-901fc659-928c-4c7c-8a49-187d2fdaf385 for group 505d9243-c3c0-46d4-9716-d160b5fb68fe for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:04,499] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-d7828d8d-8e7b-411c-8926-cc233d84c065 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:04,500] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 934db71c-a64d-4f52-8a46-845600f629f9 in Empty state. Created a new member id consumer-934db71c-a64d-4f52-8a46-845600f629f9-3-0fc77827-2f5e-4a80-a6e0-2f4d6e786b7b and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:04,503] INFO [GroupCoordinator 1]: Preparing to rebalance group 934db71c-a64d-4f52-8a46-845600f629f9 in state PreparingRebalance with old generation 0 (__consumer_offsets-39) (reason: Adding new member consumer-934db71c-a64d-4f52-8a46-845600f629f9-3-0fc77827-2f5e-4a80-a6e0-2f4d6e786b7b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:04,504] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-d7828d8d-8e7b-411c-8926-cc233d84c065 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:07,504] INFO [GroupCoordinator 1]: Stabilized group 934db71c-a64d-4f52-8a46-845600f629f9 generation 1 (__consumer_offsets-39) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:07,515] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:07,548] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-d7828d8d-8e7b-411c-8926-cc233d84c065 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-04 13:23:07,548] INFO [GroupCoordinator 1]: Assignment received from leader consumer-934db71c-a64d-4f52-8a46-845600f629f9-3-0fc77827-2f5e-4a80-a6e0-2f4d6e786b7b for group 934db71c-a64d-4f52-8a46-845600f629f9 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) =================================== ======== Logs from mariadb ======== mariadb | 2024-07-04 13:22:26+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-07-04 13:22:26+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-07-04 13:22:26+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-07-04 13:22:26+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-07-04 13:22:26 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-07-04 13:22:26 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-07-04 13:22:26 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-07-04 13:22:28+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-07-04 13:22:28+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-07-04 13:22:28+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-07-04 13:22:28 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ... mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-07-04 13:22:28 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-07-04 13:22:28 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-07-04 13:22:28 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-07-04 13:22:28 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-07-04 13:22:28 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-07-04 13:22:28 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-07-04 13:22:28 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-07-04 13:22:28 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-07-04 13:22:28 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-07-04 13:22:29+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-07-04 13:22:30+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-07-04 13:22:30+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | mariadb | 2024-07-04 13:22:31+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. 
mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | 2024-07-04 13:22:31+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-07-04 13:22:31+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-07-04 13:22:31 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-07-04 13:22:31 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-07-04 13:22:31 0 [Note] InnoDB: Starting shutdown... 
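[editor's note] The expanded (-x) trace above shows the entrypoint creating six databases and granting each to policy_user before loading /tmp/policy-clamp-create-tables.sql. A quick sanity check of that init step could look like the sketch below; it reuses only the credentials and database names visible in the trace and is not part of the actual entrypoint.

    # Hypothetical verification of the db.sh init step, run against the mariadb container.
    # Credentials (-upolicy_user -ppolicy_user) are the ones shown in the expanded trace above.
    for db in migration pooling policyadmin operationshistory clampacm policyclamp; do
      # Confirm each database exists and is reachable with the granted account.
      mysql -h mariadb -upolicy_user -ppolicy_user -e "USE \`${db}\`; SELECT 1;" \
        && echo "${db}: OK"
    done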
mariadb | 2024-07-04 13:22:31 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-07-04 13:22:31 0 [Note] InnoDB: Buffer pool(s) dump completed at 240704 13:22:31 mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Shutdown completed; log sequence number 366741; transaction id 298 mariadb | 2024-07-04 13:22:32 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-07-04 13:22:32+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-07-04 13:22:32+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-07-04 13:22:32 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-07-04 13:22:32 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-07-04 13:22:32 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-07-04 13:22:32 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: log sequence number 366741; transaction id 299 mariadb | 2024-07-04 13:22:32 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-07-04 13:22:32 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-07-04 13:22:32 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-07-04 13:22:32 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-07-04 13:22:32 0 [Note] Server socket created on IP: '::'. mariadb | 2024-07-04 13:22:32 0 [Note] mariadbd: ready for connections. 
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-07-04 13:22:32 0 [Note] InnoDB: Buffer pool(s) load completed at 240704 13:22:32 mariadb | 2024-07-04 13:22:32 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) mariadb | 2024-07-04 13:22:32 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-07-04 13:22:32 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.5' (This connection closed normally without authentication) mariadb | 2024-07-04 13:22:33 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.4' (This connection closed normally without authentication) mariadb | 2024-07-04 13:22:37 212 [Warning] Aborted connection 212 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.4:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-07-04T13:22:41.585+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-07-04T13:22:41.650+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-07-04T13:22:41.652+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-07-04T13:22:43.738+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-07-04T13:22:43.967+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 219 ms. Found 6 JPA repository interfaces. 
policy-api | [2024-07-04T13:22:44.850+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-07-04T13:22:44.865+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-07-04T13:22:44.868+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-07-04T13:22:44.868+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-07-04T13:22:44.981+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-07-04T13:22:44.982+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3251 ms policy-api | [2024-07-04T13:22:45.337+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-07-04T13:22:45.427+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-07-04T13:22:45.469+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-07-04T13:22:45.773+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-07-04T13:22:45.807+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-07-04T13:22:45.911+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4d3ca6c7 policy-api | [2024-07-04T13:22:45.914+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2024-07-04T13:22:48.325+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-07-04T13:22:48.328+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-07-04T13:22:49.060+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-07-04T13:22:49.903+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-07-04T13:22:50.942+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-07-04T13:22:51.141+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1ee47336, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5f18f8a1, org.springframework.security.web.context.SecurityContextHolderFilter@3d62648d, org.springframework.security.web.header.HeaderWriterFilter@c8d0ea3, org.springframework.security.web.authentication.logout.LogoutFilter@56cd8903, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@112188cc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@479f5c35, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@194e116d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@26d63c94, org.springframework.security.web.access.ExceptionTranslationFilter@207283b4, org.springframework.security.web.access.intercept.AuthorizationFilter@531245fe] policy-api | [2024-07-04T13:22:51.880+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-07-04T13:22:51.964+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-07-04T13:22:51.990+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-07-04T13:22:52.013+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.229 seconds (process running for 12.028) =================================== ======== Logs from csit-tests ======== policy-csit | Invoking the robot tests from: drools-pdp-test.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV: policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... 
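[editor's note] The ROBOT_VARIABLES listed above are the -v definitions handed to Robot Framework for the drools-pdp suite, and the result files land in /tmp/results. Assuming the runner simply passes these to the robot CLI together with the suite file named at the top of the section, the invocation would look roughly like the sketch below (the actual wrapper script is not part of this log, and only a subset of the -v values is repeated here):

    # Hypothetical reconstruction of the test invocation; the -v values, suite name and
    # /tmp/results paths come from the log, everything else is assumed.
    robot --outputdir /tmp/results \
      -v POLICY_API_IP:policy-api:6969 \
      -v POLICY_PAP_IP:policy-pap:6969 \
      -v POLICY_DROOLS_IP:policy-drools-pdp:9696 \
      -v KAFKA_IP:kafka:9092 \
      drools-pdp-test.robot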
policy-csit | ============================================================================== policy-csit | Drools-Pdp-Test policy-csit | ============================================================================== policy-csit | Alive :: Runs Policy PDP Alive Check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Drools-Pdp-Test | PASS | policy-csit | 2 tests, 2 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 =================================== ======== Logs from policy-db-migrator ======== policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies 
(name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY 
VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | 
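[editor's note] The policy-db-migrator log follows a clear pattern: retry until the MariaDB port accepts connections (the repeated "Connection refused" lines above), then apply the numbered NNNN-*.sql upgrade scripts in order until the schema reaches release 1300. The migrator's own script is not shown in this log, so the following is only a sketch of that visible pattern, with the host, credentials and target database assumed from the mariadb section above:

    # Sketch of the wait-then-migrate pattern visible in the log (not the real migrator).
    until nc -z -v mariadb 3306; do
      sleep 2   # prints the "Connection refused" lines until MariaDB is listening
    done
    for script in $(ls [0-9]*.sql | sort); do   # e.g. 0100-jpapdpgroup_properties.sql ...
      echo "> upgrade ${script}"
      mysql -h mariadb -uroot -p"${MYSQL_ROOT_PASSWORD}" policyadmin < "${script}"
    done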
policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) 
NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY 
PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, 
capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, 
PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT 
EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, 
nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-db-migrator | 
-------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) 
REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, 
topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT 
DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator 
| -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | TRUNCATE TABLE sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE pdpstatistics policy-db-migrator | -------------- policy-db-migrator | 
policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | name version policy-db-migrator | policyadmin 1300 policy-db-migrator | ID script operation from_version to_version tag success atTime policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:33 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 25 
0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:34 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0407241322330800u 1 
2024-07-04 13:22:35 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:35 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:36 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 
87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0407241322330800u 1 2024-07-04 13:22:37 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:37 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:37 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0407241322330900u 1 2024-07-04 13:22:38 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 
0407241322331000u 1 2024-07-04 13:22:38 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0407241322331000u 1 2024-07-04 13:22:39 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0407241322331100u 1 2024-07-04 13:22:39 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0407241322331200u 1 2024-07-04 13:22:39 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0407241322331200u 1 2024-07-04 13:22:39 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0407241322331200u 1 2024-07-04 13:22:39 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0407241322331200u 1 2024-07-04 13:22:39 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0407241322331300u 1 2024-07-04 13:22:39 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0407241322331300u 1 2024-07-04 13:22:39 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0407241322331300u 1 2024-07-04 13:22:39 policy-db-migrator | policyadmin: OK @ 1300 =================================== ======== Logs from drools-pdp ======== policy-drools-pdp | Waiting for mariadb port 3306... policy-drools-pdp | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-drools-pdp | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! policy-drools-pdp | Waiting for kafka port 9092... policy-drools-pdp | nc: connect to kafka (172.17.0.6) port 9092 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to kafka (172.17.0.6) port 9092 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to kafka (172.17.0.6) port 9092 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to kafka (172.17.0.6) port 9092 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to kafka (172.17.0.6) port 9092 (tcp) failed: Connection refused policy-drools-pdp | Connection to kafka (172.17.0.6) 9092 port [tcp/*] succeeded! 
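The entrypoint blocks until its dependencies accept TCP connections before booting; a minimal sketch of that wait loop, modeled on the nc/timeout command traced further below (SQL_HOST/SQL_PORT come from the container environment; this is not the exact pdpd-entrypoint.sh code):

wait_for() {
  host="$1"; port="$2"
  # retry for up to 120s until nc can open a TCP connection, printing dots between attempts
  timeout 120 sh -c "until nc -vz -w 20 $host $port; do echo -n '.'; sleep 1; done"
}
wait_for mariadb 3306   # matches 'Waiting for mariadb port 3306...' above
wait_for kafka 9092     # matches 'Waiting for kafka port 9092...' above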
policy-drools-pdp | + operation=boot policy-drools-pdp | + dockerBoot policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- dockerBoot --' policy-drools-pdp | + set -x policy-drools-pdp | + set -e policy-drools-pdp | + configure policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- configure --' policy-drools-pdp | + set -x policy-drools-pdp | + reload policy-drools-pdp | -- /opt/app/policy/bin/pdpd-entrypoint.sh boot -- policy-drools-pdp | -- dockerBoot -- policy-drools-pdp | -- configure -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- reload --' policy-drools-pdp | + set -x policy-drools-pdp | + systemConfs policy-drools-pdp | -- reload -- policy-drools-pdp | -- systemConfs -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- systemConfs --' policy-drools-pdp | + set -x policy-drools-pdp | + local confName policy-drools-pdp | + ls '/tmp/policy-install/config/*.conf' policy-drools-pdp | + return 0 policy-drools-pdp | + maven policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- maven --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/settings.xml ] policy-drools-pdp | -- maven -- policy-drools-pdp | + '[' -f /tmp/policy-install/config/standalone-settings.xml ] policy-drools-pdp | + features policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- features --' policy-drools-pdp | + set -x policy-drools-pdp | -- features -- policy-drools-pdp | + ls '/tmp/policy-install/config/features*.zip' policy-drools-pdp | -- security -- policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + return 0 policy-drools-pdp | + security policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- security --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-keystore ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-truststore ] policy-drools-pdp | + serverConfig properties policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=properties' policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | configuration properties: /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + echo 'configuration properties: /tmp/policy-install/config/engine-system.properties' policy-drools-pdp | + cp -f /tmp/policy-install/config/engine-system.properties /opt/app/policy/config policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + serverConfig xml policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=xml' policy-drools-pdp | + ls '/tmp/policy-install/config/*.xml' policy-drools-pdp | + return 0 policy-drools-pdp | + serverConfig json policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=json' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + ls '/tmp/policy-install/config/*.json' policy-drools-pdp | -- scripts -- policy-drools-pdp | + return 0 policy-drools-pdp | + scripts pre.sh policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- scripts --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'scriptExtSuffix=pre.sh' policy-drools-pdp | + ls 
/tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + PATH=/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + PATH=/usr/lib/jvm/java-17-openjdk/bin:/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + echo 'executing script: /tmp/policy-install/config/noop.pre.sh' policy-drools-pdp | + source /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | executing script: /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + chmod 644 /opt/app/policy/config/engine.properties /opt/app/policy/config/feature-lifecycle.properties policy-drools-pdp | -- db -- policy-drools-pdp | + db policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- db --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -z mariadb ] policy-drools-pdp | + '[' -z 3306 ] policy-drools-pdp | + echo 'Waiting for mariadb:3306 ...' policy-drools-pdp | + timeout 120 sh -c 'until nc -vz -w 20 "${SQL_HOST}" "${SQL_PORT}"; do echo -n "."; sleep 1; done' policy-drools-pdp | Waiting for mariadb:3306 ... policy-drools-pdp | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 
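The traced lines below show /opt/app/policy/bin/db-migrator -s ALL -o upgrade finding no per-schema directories under /opt/app/policy/etc/db/migration inside this container (the schema upgrade itself was already carried out by the policy-db-migrator container logged above) and exiting cleanly. A simplified sketch of that discovery check, assuming only the paths and messages visible in the trace and not the actual db-migrator source:

# list per-schema migration directories; if none exist, report and exit 0 as in the trace below
SCHEMA_S=$(ls -d /opt/app/policy/etc/db/migration/*/ 2>/dev/null)
if [ -z "$SCHEMA_S" ]; then
  echo "error: no databases available"
  exit 0
fi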
policy-drools-pdp | + /opt/app/policy/bin/db-migrator -s ALL -o upgrade policy-drools-pdp | + '[' -z -s ] policy-drools-pdp | + shift policy-drools-pdp | + SCHEMA=ALL policy-drools-pdp | + shift policy-drools-pdp | + '[' -z -o ] policy-drools-pdp | + shift policy-drools-pdp | + OPERATION=upgrade policy-drools-pdp | -- /opt/app/policy/bin/db-migrator -s ALL -o upgrade -- policy-drools-pdp | + shift policy-drools-pdp | + '[' -z ] policy-drools-pdp | + '[' -z ALL ] policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + '[' -z mariadb ] policy-drools-pdp | + '[' -z policy_user ] policy-drools-pdp | + '[' -z policy_user ] policy-drools-pdp | + '[' -z 3306 ] policy-drools-pdp | + '[' -z ] policy-drools-pdp | + MYSQL_CMD=mysql policy-drools-pdp | + MYSQL='mysql -upolicy_user -ppolicy_user -h mariadb -P 3306' policy-drools-pdp | + mysql -upolicy_user -ppolicy_user -h mariadb -P 3306 --execute 'show databases;' policy-drools-pdp | + '[' ALL '=' ALL ] policy-drools-pdp | + SCHEMA='*' policy-drools-pdp | + ls -d '/opt/app/policy/etc/db/migration/*/' policy-drools-pdp | + SCHEMA_S= policy-drools-pdp | + '[' -z ] policy-drools-pdp | + echo 'error: no databases available' policy-drools-pdp | + exit 0 policy-drools-pdp | error: no databases available policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + policy exec policy-drools-pdp | -- /opt/app/policy/bin/policy exec -- policy-drools-pdp | + BIN_SCRIPT=bin/policy-management-controller policy-drools-pdp | + OPERATION=none policy-drools-pdp | + '[' -z exec ] policy-drools-pdp | + OPERATION=exec policy-drools-pdp | + shift policy-drools-pdp | + '[' -z ] policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + policy_exec policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- policy_exec --' policy-drools-pdp | -- policy_exec -- policy-drools-pdp | + set -x policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + check_x_file bin/policy-management-controller policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- check_x_file --' policy-drools-pdp | -- check_x_file -- policy-drools-pdp | + set -x policy-drools-pdp | + FILE=bin/policy-management-controller policy-drools-pdp | + '[[' '!' -f bin/policy-management-controller '||' '!' 
-x bin/policy-management-controller ]] policy-drools-pdp | + return 0 policy-drools-pdp | + bin/policy-management-controller exec policy-drools-pdp | + _DIR=/opt/app/policy policy-drools-pdp | + _LOGS=/var/log/onap/policy/pdpd policy-drools-pdp | + '[' -z /var/log/onap/policy/pdpd ] policy-drools-pdp | + CONTROLLER=policy-management-controller policy-drools-pdp | + RETVAL=0 policy-drools-pdp | + _PIDFILE=/opt/app/policy/PID policy-drools-pdp | -- bin/policy-management-controller exec -- policy-drools-pdp | -- exec_start -- policy-drools-pdp | + exec_start policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- exec_start --' policy-drools-pdp | + set -x policy-drools-pdp | + status policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- status --' policy-drools-pdp | -- status -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /opt/app/policy/PID ] policy-drools-pdp | + '[' true ] policy-drools-pdp | + pidof -s java policy-drools-pdp | + _PID= policy-drools-pdp | + _STATUS='Policy Management (no pidfile) is NOT running' policy-drools-pdp | + _RUNNING=0 policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + RETVAL=1 policy-drools-pdp | + echo 'Policy Management (no pidfile) is NOT running' policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + preRunning policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- preRunning --' policy-drools-pdp | + set -x policy-drools-pdp | + mkdir -p /var/log/onap/policy/pdpd policy-drools-pdp | Policy Management (no pidfile) is NOT running policy-drools-pdp | -- preRunning -- policy-drools-pdp | + ls /opt/app/policy/lib/accessors-smart-2.5.0.jar /opt/app/policy/lib/angus-activation-2.0.2.jar /opt/app/policy/lib/annotations-13.0.jar /opt/app/policy/lib/ant-1.10.14.jar /opt/app/policy/lib/ant-launcher-1.10.14.jar /opt/app/policy/lib/antlr-2.7.7.jar /opt/app/policy/lib/antlr-runtime-3.5.2.jar /opt/app/policy/lib/antlr4-runtime-4.10.1.jar /opt/app/policy/lib/aopalliance-1.0.jar /opt/app/policy/lib/aopalliance-repackaged-3.0.5.jar /opt/app/policy/lib/asm-9.3.jar /opt/app/policy/lib/byte-buddy-1.14.7.jar /opt/app/policy/lib/caffeine-2.9.3.jar /opt/app/policy/lib/capabilities-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/checker-qual-3.42.0.jar /opt/app/policy/lib/classgraph-4.8.165.jar /opt/app/policy/lib/classmate-1.5.1.jar /opt/app/policy/lib/common-parameters-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/commons-beanutils-1.9.4.jar /opt/app/policy/lib/commons-cli-1.5.0.jar /opt/app/policy/lib/commons-codec-1.16.0.jar /opt/app/policy/lib/commons-collections4-4.4.jar /opt/app/policy/lib/commons-configuration2-2.8.0.jar /opt/app/policy/lib/commons-io-2.13.0.jar /opt/app/policy/lib/commons-jexl3-3.2.1.jar /opt/app/policy/lib/commons-lang3-3.14.0.jar /opt/app/policy/lib/commons-logging-1.2.jar /opt/app/policy/lib/commons-net-3.9.0.jar /opt/app/policy/lib/commons-text-1.10.0.jar /opt/app/policy/lib/dom4j-2.1.3.jar /opt/app/policy/lib/drools-base-8.40.1.Final.jar /opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar /opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar /opt/app/policy/lib/drools-commands-8.40.1.Final.jar /opt/app/policy/lib/drools-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-core-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-ecj-8.40.1.Final.jar /opt/app/policy/lib/drools-engine-8.40.1.Final.jar 
/opt/app/policy/lib/drools-io-8.40.1.Final.jar /opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar /opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar /opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar /opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar /opt/app/policy/lib/drools-tms-8.40.1.Final.jar /opt/app/policy/lib/drools-util-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar /opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar /opt/app/policy/lib/ecj-3.33.0.jar /opt/app/policy/lib/error_prone_annotations-2.23.0.jar /opt/app/policy/lib/failureaccess-1.0.2.jar /opt/app/policy/lib/feature-lifecycle-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/gson-2.10.1.jar /opt/app/policy/lib/gson-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar /opt/app/policy/lib/guava-33.0.0-jre.jar /opt/app/policy/lib/guice-4.2.2-no_aop.jar /opt/app/policy/lib/hibernate-commons-annotations-6.0.6.Final.jar /opt/app/policy/lib/hibernate-core-6.3.2.Final.jar /opt/app/policy/lib/hibernate-core-jakarta-5.6.15.Final.jar /opt/app/policy/lib/hibernate-validator-8.0.1.Final.jar /opt/app/policy/lib/hk2-api-3.0.5.jar /opt/app/policy/lib/hk2-locator-3.0.5.jar /opt/app/policy/lib/hk2-utils-3.0.5.jar /opt/app/policy/lib/httpclient-4.5.14.jar /opt/app/policy/lib/httpcore-4.4.16.jar /opt/app/policy/lib/istack-commons-runtime-4.1.2.jar /opt/app/policy/lib/j2objc-annotations-2.8.jar /opt/app/policy/lib/jackson-annotations-2.16.1.jar /opt/app/policy/lib/jackson-core-2.16.1.jar /opt/app/policy/lib/jackson-databind-2.16.1.jar /opt/app/policy/lib/jackson-dataformat-yaml-2.16.1.jar /opt/app/policy/lib/jackson-datatype-jsr310-2.16.1.jar /opt/app/policy/lib/jackson-jakarta-rs-base-2.16.1.jar /opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.16.1.jar /opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.16.1.jar /opt/app/policy/lib/jakarta.activation-api-2.1.2.jar /opt/app/policy/lib/jakarta.annotation-api-2.1.1.jar /opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar /opt/app/policy/lib/jakarta.el-api-3.0.3.jar /opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar /opt/app/policy/lib/jakarta.inject-api-2.0.1.jar /opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar /opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar /opt/app/policy/lib/jakarta.servlet-api-6.0.0.jar /opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar /opt/app/policy/lib/jakarta.validation-api-3.0.2.jar /opt/app/policy/lib/jakarta.ws.rs-api-3.1.0.jar /opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar /opt/app/policy/lib/jandex-2.4.2.Final.jar /opt/app/policy/lib/jandex-3.1.2.jar /opt/app/policy/lib/javaparser-core-3.24.2.jar /opt/app/policy/lib/javassist-3.29.2-GA.jar /opt/app/policy/lib/javax.inject-1.jar /opt/app/policy/lib/javax.inject-2.5.0-b62.jar /opt/app/policy/lib/jaxb-core-4.0.5.jar /opt/app/policy/lib/jaxb-impl-4.0.5.jar /opt/app/policy/lib/jaxb-runtime-4.0.5.jar /opt/app/policy/lib/jaxb-xjc-4.0.5.jar /opt/app/policy/lib/jboss-logging-3.5.0.Final.jar /opt/app/policy/lib/jcl-over-slf4j-2.0.12.jar /opt/app/policy/lib/jersey-client-3.1.5.jar 
/opt/app/policy/lib/jersey-common-3.1.5.jar /opt/app/policy/lib/jersey-container-servlet-core-3.1.5.jar /opt/app/policy/lib/jersey-hk2-3.1.5.jar /opt/app/policy/lib/jersey-server-3.1.5.jar /opt/app/policy/lib/jetty-http-11.0.20.jar /opt/app/policy/lib/jetty-io-11.0.20.jar /opt/app/policy/lib/jetty-jakarta-servlet-api-5.0.2.jar /opt/app/policy/lib/jetty-security-11.0.20.jar /opt/app/policy/lib/jetty-server-11.0.20.jar /opt/app/policy/lib/jetty-servlet-11.0.20.jar /opt/app/policy/lib/jetty-util-11.0.20.jar /opt/app/policy/lib/jna-5.13.0.jar /opt/app/policy/lib/jna-platform-5.13.0.jar /opt/app/policy/lib/json-path-2.9.0.jar /opt/app/policy/lib/json-smart-2.5.0.jar /opt/app/policy/lib/jsr305-3.0.2.jar /opt/app/policy/lib/kafka-clients-3.6.1.jar /opt/app/policy/lib/kie-api-8.40.1.Final.jar /opt/app/policy/lib/kie-ci-8.40.1.Final.jar /opt/app/policy/lib/kie-internal-8.40.1.Final.jar /opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar /opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar /opt/app/policy/lib/kotlin-reflect-1.9.23.jar /opt/app/policy/lib/kotlin-stdlib-1.9.23.jar /opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar /opt/app/policy/lib/logback-classic-1.4.14.jar /opt/app/policy/lib/logback-core-1.4.14.jar /opt/app/policy/lib/lombok-1.18.30.jar /opt/app/policy/lib/lz4-java-1.8.0.jar /opt/app/policy/lib/mariadb-java-client-3.3.3.jar /opt/app/policy/lib/maven-artifact-3.8.6.jar /opt/app/policy/lib/maven-builder-support-3.8.6.jar /opt/app/policy/lib/maven-compat-3.8.6.jar /opt/app/policy/lib/maven-core-3.8.6.jar /opt/app/policy/lib/maven-model-3.8.6.jar /opt/app/policy/lib/maven-model-builder-3.8.6.jar /opt/app/policy/lib/maven-plugin-api-3.8.6.jar /opt/app/policy/lib/maven-repository-metadata-3.8.6.jar /opt/app/policy/lib/maven-resolver-api-1.6.3.jar /opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar /opt/app/policy/lib/maven-resolver-impl-1.6.3.jar /opt/app/policy/lib/maven-resolver-provider-3.8.6.jar /opt/app/policy/lib/maven-resolver-spi-1.6.3.jar /opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar /opt/app/policy/lib/maven-resolver-util-1.6.3.jar /opt/app/policy/lib/maven-settings-3.8.6.jar /opt/app/policy/lib/maven-settings-builder-3.8.6.jar /opt/app/policy/lib/maven-shared-utils-3.3.4.jar /opt/app/policy/lib/medeia-validator-core-1.1.1.jar /opt/app/policy/lib/medeia-validator-gson-1.1.1.jar /opt/app/policy/lib/mvel2-2.5.2.Final.jar /opt/app/policy/lib/mxparser-1.2.2.jar /opt/app/policy/lib/opentelemetry-api-1.25.0.jar /opt/app/policy/lib/opentelemetry-context-1.25.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-1.25.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-semconv-1.25.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-2.6-1.25.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-common-1.25.0-alpha.jar /opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar /opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar /opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar /opt/app/policy/lib/osgi-resource-locator-1.0.3.jar /opt/app/policy/lib/plexus-cipher-2.0.jar /opt/app/policy/lib/plexus-classworlds-2.6.0.jar /opt/app/policy/lib/plexus-component-annotations-2.1.0.jar /opt/app/policy/lib/plexus-interpolation-1.26.jar 
/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar /opt/app/policy/lib/plexus-utils-3.5.0.jar /opt/app/policy/lib/policy-core-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-domains-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-endpoints-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-management-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-models-base-4.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-models-dao-4.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-models-errors-4.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-models-examples-4.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-models-pdp-4.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-models-tosca-4.0.0-SNAPSHOT.jar /opt/app/policy/lib/policy-utils-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/postgresql-42.7.2.jar /opt/app/policy/lib/protobuf-java-3.22.0.jar /opt/app/policy/lib/re2j-1.7.jar /opt/app/policy/lib/simpleclient-0.16.0.jar /opt/app/policy/lib/simpleclient_common-0.16.0.jar /opt/app/policy/lib/simpleclient_hotspot-0.16.0.jar /opt/app/policy/lib/simpleclient_logback-0.16.0.jar /opt/app/policy/lib/simpleclient_servlet_common-0.16.0.jar /opt/app/policy/lib/simpleclient_servlet_jakarta-0.16.0.jar /opt/app/policy/lib/simpleclient_tracer_common-0.16.0.jar /opt/app/policy/lib/simpleclient_tracer_otel-0.16.0.jar /opt/app/policy/lib/simpleclient_tracer_otel_agent-0.16.0.jar /opt/app/policy/lib/slf4j-api-2.0.12.jar /opt/app/policy/lib/snakeyaml-2.2.jar /opt/app/policy/lib/snappy-java-1.1.10.5.jar /opt/app/policy/lib/swagger-annotations-2.2.20.jar /opt/app/policy/lib/swagger-annotations-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-core-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-integration-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-models-jakarta-2.2.20.jar /opt/app/policy/lib/txw2-4.0.5.jar /opt/app/policy/lib/utils-3.0.0-SNAPSHOT.jar /opt/app/policy/lib/waffle-jna-3.3.0.jar /opt/app/policy/lib/wagon-http-3.5.1.jar /opt/app/policy/lib/wagon-http-shared-3.5.1.jar /opt/app/policy/lib/wagon-provider-api-3.5.1.jar /opt/app/policy/lib/xmlpull-1.1.3.1.jar /opt/app/policy/lib/xstream-1.4.20.jar /opt/app/policy/lib/zstd-jni-1.5.5-1.jar policy-drools-pdp | + xargs -I X printf ':%s' X policy-drools-pdp | + 
CP=:/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/annotations-13.0.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-2.7.7.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.10.1.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.5.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.14.7.jar:/opt/app/policy/lib/caffeine-2.9.3.jar:/opt/app/policy/lib/capabilities-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.42.0.jar:/opt/app/policy/lib/classgraph-4.8.165.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.9.4.jar:/opt/app/policy/lib/commons-cli-1.5.0.jar:/opt/app/policy/lib/commons-codec-1.16.0.jar:/opt/app/policy/lib/commons-collections4-4.4.jar:/opt/app/policy/lib/commons-configuration2-2.8.0.jar:/opt/app/policy/lib/commons-io-2.13.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.14.0.jar:/opt/app/policy/lib/commons-logging-1.2.jar:/opt/app/policy/lib/commons-net-3.9.0.jar:/opt/app/policy/lib/commons-text-1.10.0.jar:/opt/app/policy/lib/dom4j-2.1.3.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.23.0.jar:/opt/app/policy/lib/failureaccess-1.0.2.jar:/opt/app/policy/lib/feature-lifecycle-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.10.1.jar:/opt/app/policy/lib/gson-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.0.0-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/hibernate-commons-annotations-6.0.6.Final.jar:/opt/app/policy/lib/hibernate-core-6.3.2.Final.jar:/opt/app/policy/lib/hibernate-core-jakarta-5.6.15.Final.jar:/opt/app/policy/lib/hibernate-validator-8.0.1.Final.jar:/opt/app/policy/lib/hk2-api-3.0.5.jar:/opt/app/policy/lib/hk2-locator-3.0.5.jar:/opt/app/policy/lib/hk2-util
s-3.0.5.jar:/opt/app/policy/lib/httpclient-4.5.14.jar:/opt/app/policy/lib/httpcore-4.4.16.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-2.8.jar:/opt/app/policy/lib/jackson-annotations-2.16.1.jar:/opt/app/policy/lib/jackson-core-2.16.1.jar:/opt/app/policy/lib/jackson-databind-2.16.1.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.16.1.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.16.1.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.16.1.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.2.jar:/opt/app/policy/lib/jakarta.annotation-api-2.1.1.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.0.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.0.2.jar:/opt/app/policy/lib/jakarta.ws.rs-api-3.1.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-2.4.2.Final.jar:/opt/app/policy/lib/jandex-3.1.2.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.29.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/javax.inject-2.5.0-b62.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.12.jar:/opt/app/policy/lib/jersey-client-3.1.5.jar:/opt/app/policy/lib/jersey-common-3.1.5.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.5.jar:/opt/app/policy/lib/jersey-hk2-3.1.5.jar:/opt/app/policy/lib/jersey-server-3.1.5.jar:/opt/app/policy/lib/jetty-http-11.0.20.jar:/opt/app/policy/lib/jetty-io-11.0.20.jar:/opt/app/policy/lib/jetty-jakarta-servlet-api-5.0.2.jar:/opt/app/policy/lib/jetty-security-11.0.20.jar:/opt/app/policy/lib/jetty-server-11.0.20.jar:/opt/app/policy/lib/jetty-servlet-11.0.20.jar:/opt/app/policy/lib/jetty-util-11.0.20.jar:/opt/app/policy/lib/jna-5.13.0.jar:/opt/app/policy/lib/jna-platform-5.13.0.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsr305-3.0.2.jar:/opt/app/policy/lib/kafka-clients-3.6.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/kotlin-reflect-1.9.23.jar:/opt/app/policy/lib/kotlin-stdlib-1.9.23.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.4.14.jar:/opt/app/policy/lib/logback-core-1.4.14.jar:/opt/app/policy/lib/lombok-1.18.30.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/mariadb-java-client-3.3.3.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.
6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/medeia-validator-core-1.1.1.jar:/opt/app/policy/lib/medeia-validator-gson-1.1.1.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-context-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.5.0.jar:/opt/app/policy/lib/policy-core-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.2.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.7.jar:/opt/app/policy/lib/simpleclient-0.16.0.jar:/opt/app/policy/lib/simpleclient_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_hotspot-0.16.0.jar:/opt/app/policy/lib/simpleclient_logback-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_jakarta-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel_agent-0.16.0.jar:/opt/app/policy/lib/slf4j-api-2.0.12.jar:/opt/app/policy/lib/snakeyaml-2.2.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.20.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.
20.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.20.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/waffle-jna-3.3.0.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.5-1.jar policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + /opt/app/policy/bin/configure-maven policy-drools-pdp | + HOME_M2=/home/policy/.m2 policy-drools-pdp | + mkdir -p /home/policy/.m2 policy-drools-pdp | + '[' -z http://nexus:8081/nexus/content/repositories/snapshots/ ] policy-drools-pdp | + ln -s -f /opt/app/policy/etc/m2/settings.xml /home/policy/.m2/settings.xml policy-drools-pdp | + '[' -f /opt/app/policy/config/system.properties ] policy-drools-pdp | + sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /opt/app/policy/config/system.properties policy-drools-pdp | + systemProperties='-Dlogback.configurationFile=config/logback.xml' policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + exec /usr/lib/jvm/java-17-openjdk/bin/java -server -Xms512m -Xmx512m -cp 
/opt/app/policy/config:/opt/app/policy/lib::/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/annotations-13.0.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-2.7.7.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.10.1.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.5.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.14.7.jar:/opt/app/policy/lib/caffeine-2.9.3.jar:/opt/app/policy/lib/capabilities-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.42.0.jar:/opt/app/policy/lib/classgraph-4.8.165.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.9.4.jar:/opt/app/policy/lib/commons-cli-1.5.0.jar:/opt/app/policy/lib/commons-codec-1.16.0.jar:/opt/app/policy/lib/commons-collections4-4.4.jar:/opt/app/policy/lib/commons-configuration2-2.8.0.jar:/opt/app/policy/lib/commons-io-2.13.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.14.0.jar:/opt/app/policy/lib/commons-logging-1.2.jar:/opt/app/policy/lib/commons-net-3.9.0.jar:/opt/app/policy/lib/commons-text-1.10.0.jar:/opt/app/policy/lib/dom4j-2.1.3.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.23.0.jar:/opt/app/policy/lib/failureaccess-1.0.2.jar:/opt/app/policy/lib/feature-lifecycle-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.10.1.jar:/opt/app/policy/lib/gson-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.0.0-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/hibernate-commons-annotations-6.0.6.Final.jar:/opt/app/policy/lib/hibernate-core-6.3.2.Final.jar:/opt/app/policy/lib/hibernate-core-jakarta-5.6.15.Final.jar:/opt/app/policy/lib/hibernate-validator-8.0.1.Final.jar:/opt/app/policy/lib/hk2-api-3.0.5.jar:/opt/app/policy/lib/hk2-locato
r-3.0.5.jar:/opt/app/policy/lib/hk2-utils-3.0.5.jar:/opt/app/policy/lib/httpclient-4.5.14.jar:/opt/app/policy/lib/httpcore-4.4.16.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-2.8.jar:/opt/app/policy/lib/jackson-annotations-2.16.1.jar:/opt/app/policy/lib/jackson-core-2.16.1.jar:/opt/app/policy/lib/jackson-databind-2.16.1.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.16.1.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.16.1.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.16.1.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.2.jar:/opt/app/policy/lib/jakarta.annotation-api-2.1.1.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.0.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.0.2.jar:/opt/app/policy/lib/jakarta.ws.rs-api-3.1.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-2.4.2.Final.jar:/opt/app/policy/lib/jandex-3.1.2.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.29.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/javax.inject-2.5.0-b62.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.12.jar:/opt/app/policy/lib/jersey-client-3.1.5.jar:/opt/app/policy/lib/jersey-common-3.1.5.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.5.jar:/opt/app/policy/lib/jersey-hk2-3.1.5.jar:/opt/app/policy/lib/jersey-server-3.1.5.jar:/opt/app/policy/lib/jetty-http-11.0.20.jar:/opt/app/policy/lib/jetty-io-11.0.20.jar:/opt/app/policy/lib/jetty-jakarta-servlet-api-5.0.2.jar:/opt/app/policy/lib/jetty-security-11.0.20.jar:/opt/app/policy/lib/jetty-server-11.0.20.jar:/opt/app/policy/lib/jetty-servlet-11.0.20.jar:/opt/app/policy/lib/jetty-util-11.0.20.jar:/opt/app/policy/lib/jna-5.13.0.jar:/opt/app/policy/lib/jna-platform-5.13.0.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsr305-3.0.2.jar:/opt/app/policy/lib/kafka-clients-3.6.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/kotlin-reflect-1.9.23.jar:/opt/app/policy/lib/kotlin-stdlib-1.9.23.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.4.14.jar:/opt/app/policy/lib/logback-core-1.4.14.jar:/opt/app/policy/lib/lombok-1.18.30.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/mariadb-java-client-3.3.3.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6
.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/medeia-validator-core-1.1.1.jar:/opt/app/policy/lib/medeia-validator-gson-1.1.1.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-context-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.5.0.jar:/opt/app/policy/lib/policy-core-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.0.0-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.2.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.7.jar:/opt/app/policy/lib/simpleclient-0.16.0.jar:/opt/app/policy/lib/simpleclient_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_hotspot-0.16.0.jar:/opt/app/policy/lib/simpleclient_logback-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_jakarta-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel_agent-0.16.0.jar:/opt/app/policy/lib/slf4j-api-2.0.12.jar:/opt/app/policy/lib/snakeyaml-2.2.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.20.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.20.jar:/opt/ap
p/policy/lib/swagger-jaxrs2-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.20.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.0.0-SNAPSHOT.jar:/opt/app/policy/lib/waffle-jna-3.3.0.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.5-1.jar '-Dlogback.configurationFile=config/logback.xml' org.onap.policy.drools.system.Main policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwitchesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwaggerApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.DefaultApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.EnvironmentApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LifecycleApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.FeaturesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.InputsApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.PropertiesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LegacyApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.TopicsApi cannot be instantiated and will be ignored. 
policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ControllersApi cannot be instantiated and will be ignored. policy-drools-pdp | Jul 04, 2024 1:22:39 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ToolsApi cannot be instantiated and will be ignored. =================================== ======== Logs from pap ======== policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.3:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.6:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.5:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.10) policy-pap | policy-pap | [2024-07-04T13:22:54.764+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-pap | [2024-07-04T13:22:54.838+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-07-04T13:22:54.839+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-07-04T13:22:56.799+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-07-04T13:22:56.894+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 86 ms. Found 7 JPA repository interfaces. policy-pap | [2024-07-04T13:22:57.342+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-07-04T13:22:57.342+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-07-04T13:22:57.900+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-07-04T13:22:57.910+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-07-04T13:22:57.912+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-07-04T13:22:57.912+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2024-07-04T13:22:58.006+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-07-04T13:22:58.006+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3087 ms policy-pap | [2024-07-04T13:22:58.392+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-07-04T13:22:58.440+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-pap | [2024-07-04T13:22:58.770+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-07-04T13:22:58.857+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@51288417 policy-pap | [2024-07-04T13:22:58.859+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2024-07-04T13:22:58.889+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-07-04T13:23:00.498+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-07-04T13:23:00.511+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-07-04T13:23:01.031+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-07-04T13:23:01.406+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-07-04T13:23:01.513+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-07-04T13:23:01.794+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-934db71c-a64d-4f52-8a46-845600f629f9-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 934db71c-a64d-4f52-8a46-845600f629f9 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null 
policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-04T13:23:01.985+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-04T13:23:01.986+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-04T13:23:01.986+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720099381984 policy-pap | [2024-07-04T13:23:01.988+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-1, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-04T13:23:01.989+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | 
request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-04T13:23:01.995+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-04T13:23:01.995+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-04T13:23:01.995+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720099381995 policy-pap | [2024-07-04T13:23:01.995+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-04T13:23:02.314+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, 
pdpSubgroups=[PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-07-04T13:23:02.467+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-07-04T13:23:02.781+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@5ccc971e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@28269c65, org.springframework.security.web.context.SecurityContextHolderFilter@59ac77f9, org.springframework.security.web.header.HeaderWriterFilter@3ba1f56e, org.springframework.security.web.authentication.logout.LogoutFilter@27a6384b, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3f2ef402, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@76134251, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7bde704a, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4c9d833, org.springframework.security.web.access.ExceptionTranslationFilter@36ab69d9, org.springframework.security.web.access.intercept.AuthorizationFilter@20518250] policy-pap | [2024-07-04T13:23:03.779+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-07-04T13:23:03.878+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-07-04T13:23:03.914+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-07-04T13:23:03.936+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-07-04T13:23:03.936+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-07-04T13:23:03.938+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-07-04T13:23:03.939+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-07-04T13:23:03.939+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2024-07-04T13:23:03.939+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-07-04T13:23:03.939+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-07-04T13:23:03.942+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=934db71c-a64d-4f52-8a46-845600f629f9, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@138b9abe 
policy-pap | [2024-07-04T13:23:03.956+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=934db71c-a64d-4f52-8a46-845600f629f9, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-04T13:23:03.956+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-934db71c-a64d-4f52-8a46-845600f629f9-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 934db71c-a64d-4f52-8a46-845600f629f9 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | 
sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-04T13:23:03.963+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-04T13:23:03.963+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-04T13:23:03.963+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720099383963 policy-pap | [2024-07-04T13:23:03.963+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-04T13:23:03.963+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-07-04T13:23:03.963+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7fd02203-adf5-4313-9172-c06cfb3c6c3e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2987b699 policy-pap | [2024-07-04T13:23:03.963+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7fd02203-adf5-4313-9172-c06cfb3c6c3e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, 
locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-04T13:23:03.964+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = 
scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-04T13:23:03.968+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-04T13:23:03.968+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-04T13:23:03.968+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720099383968 policy-pap | [2024-07-04T13:23:03.968+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-04T13:23:03.972+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-07-04T13:23:03.972+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7fd02203-adf5-4313-9172-c06cfb3c6c3e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-04T13:23:03.972+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=934db71c-a64d-4f52-8a46-845600f629f9, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-04T13:23:03.972+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0adfa39c-c459-463f-9ea8-66e9a75c4464, 
alive=false, publisher=null]]: starting policy-pap | [2024-07-04T13:23:03.989+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | 
ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-07-04T13:23:03.999+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2024-07-04T13:23:04.025+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-04T13:23:04.025+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-04T13:23:04.025+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720099384025 policy-pap | [2024-07-04T13:23:04.026+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0adfa39c-c459-463f-9ea8-66e9a75c4464, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-07-04T13:23:04.026+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=94fe4028-0fd4-4b91-a5fb-cf15ceba26a9, alive=false, publisher=null]]: starting policy-pap | [2024-07-04T13:23:04.026+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 
policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-07-04T13:23:04.027+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
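Both ProducerConfig dumps above describe idempotent String/String producers (acks = -1, enable.idempotence = true, retries = 2147483647) pointed at kafka:9092. A minimal sketch of an equivalent publisher follows, assuming the policy-pdp-pap topic from earlier in the log and a placeholder payload; it is not PAP's actual publisher code.

```java
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

/**
 * Minimal sketch of the idempotent String/String producer described by the
 * ProducerConfig dumps above (acks=-1, enable.idempotence=true, retries=MAX).
 */
public class PdpPapPublisherSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                 // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // logged as retries = 2147483647
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Payload is a placeholder; the real messages PAP publishes here are PDP updates/state changes.
            Future<RecordMetadata> ack =
                    producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            System.out.println("published to partition " + ack.get().partition());
        }
    }
}
```

The Metadata and ConsumerCoordinator lines that follow show the producers obtaining the cluster ID and the two consumers going through the normal join, sync, and partition-assignment cycle for policy-pdp-pap-0.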
policy-pap | [2024-07-04T13:23:04.030+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-04T13:23:04.030+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-04T13:23:04.030+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720099384030 policy-pap | [2024-07-04T13:23:04.031+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=94fe4028-0fd4-4b91-a5fb-cf15ceba26a9, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-07-04T13:23:04.031+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-07-04T13:23:04.031+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2024-07-04T13:23:04.033+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-07-04T13:23:04.034+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-07-04T13:23:04.039+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2024-07-04T13:23:04.041+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-07-04T13:23:04.041+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2024-07-04T13:23:04.042+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2024-07-04T13:23:04.042+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-07-04T13:23:04.043+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2024-07-04T13:23:04.044+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.023 seconds (process running for 10.624) policy-pap | [2024-07-04T13:23:04.051+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2024-07-04T13:23:04.477+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: s0RFBie2SzS4BKGvbus8AQ policy-pap | [2024-07-04T13:23:04.477+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-pap | [2024-07-04T13:23:04.478+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: s0RFBie2SzS4BKGvbus8AQ policy-pap | [2024-07-04T13:23:04.478+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-pap | [2024-07-04T13:23:04.478+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Cluster ID: s0RFBie2SzS4BKGvbus8AQ policy-pap | [2024-07-04T13:23:04.479+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-07-04T13:23:04.479+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: s0RFBie2SzS4BKGvbus8AQ policy-pap | [2024-07-04T13:23:04.480+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | 
[2024-07-04T13:23:04.486+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] (Re-)joining group policy-pap | [2024-07-04T13:23:04.486+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-07-04T13:23:04.501+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Request joining group due to: need to re-join with the given member-id: consumer-934db71c-a64d-4f52-8a46-845600f629f9-3-0fc77827-2f5e-4a80-a6e0-2f4d6e786b7b policy-pap | [2024-07-04T13:23:04.501+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-07-04T13:23:04.501+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] (Re-)joining group policy-pap | [2024-07-04T13:23:04.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-d7828d8d-8e7b-411c-8926-cc233d84c065 policy-pap | [2024-07-04T13:23:04.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-07-04T13:23:04.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-07-04T13:23:07.516+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-d7828d8d-8e7b-411c-8926-cc233d84c065', protocol='range'} policy-pap | [2024-07-04T13:23:07.516+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Successfully joined group with generation Generation{generationId=1, memberId='consumer-934db71c-a64d-4f52-8a46-845600f629f9-3-0fc77827-2f5e-4a80-a6e0-2f4d6e786b7b', protocol='range'} policy-pap | [2024-07-04T13:23:07.543+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Finished assignment for group at generation 1: {consumer-934db71c-a64d-4f52-8a46-845600f629f9-3-0fc77827-2f5e-4a80-a6e0-2f4d6e786b7b=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-07-04T13:23:07.543+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-d7828d8d-8e7b-411c-8926-cc233d84c065=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-07-04T13:23:07.551+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-d7828d8d-8e7b-411c-8926-cc233d84c065', protocol='range'} policy-pap | [2024-07-04T13:23:07.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-07-04T13:23:07.553+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Successfully synced group in generation Generation{generationId=1, memberId='consumer-934db71c-a64d-4f52-8a46-845600f629f9-3-0fc77827-2f5e-4a80-a6e0-2f4d6e786b7b', protocol='range'} policy-pap | [2024-07-04T13:23:07.554+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-07-04T13:23:07.555+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-07-04T13:23:07.555+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-07-04T13:23:07.564+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition 
policy-pdp-pap-0 policy-pap | [2024-07-04T13:23:07.564+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-07-04T13:23:07.591+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-07-04T13:23:07.592+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-934db71c-a64d-4f52-8a46-845600f629f9-3, groupId=934db71c-a64d-4f52-8a46-845600f629f9] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. =================================== ======== Logs from zookeeper ======== zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2024-07-04 13:22:31,314] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,322] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,322] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,322] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,322] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,324] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-07-04 13:22:31,324] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-07-04 13:22:31,324] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-07-04 13:22:31,324] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2024-07-04 13:22:31,325] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2024-07-04 13:22:31,326] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,326] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,326] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,326] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,326] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-04 13:22:31,326] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2024-07-04 13:22:31,338] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2024-07-04 13:22:31,340] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-07-04 13:22:31,340] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-07-04 13:22:31,343] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-07-04 13:22:31,352] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,352] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,352] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,352] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,352] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,352] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,353] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,353] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,353] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,353] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/ja
ckson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 
13:22:31,354] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,354] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,355] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,355] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,355] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,355] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,355] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,355] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,355] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,356] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2024-07-04 13:22:31,357] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,357] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,358] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-07-04 13:22:31,358] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-07-04 13:22:31,359] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-04 13:22:31,359] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-04 13:22:31,359] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-04 13:22:31,359] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-04 13:22:31,359] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-04 13:22:31,359] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-04 13:22:31,361] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,361] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,362] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-07-04 13:22:31,362] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-07-04 13:22:31,362] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,381] INFO Logging initialized @578ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2024-07-04 13:22:31,464] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-07-04 13:22:31,465] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-07-04 13:22:31,483] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) zookeeper | [2024-07-04 13:22:31,508] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2024-07-04 13:22:31,508] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2024-07-04 13:22:31,509] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2024-07-04 13:22:31,512] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2024-07-04 13:22:31,520] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-07-04 13:22:31,534] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2024-07-04 13:22:31,534] INFO Started @731ms (org.eclipse.jetty.server.Server) zookeeper | [2024-07-04 13:22:31,534] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2024-07-04 13:22:31,538] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-07-04 13:22:31,539] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-07-04 13:22:31,540] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-07-04 13:22:31,541] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-07-04 13:22:31,560] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-07-04 13:22:31,560] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-07-04 13:22:31,562] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-07-04 13:22:31,562] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-07-04 13:22:31,568] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2024-07-04 13:22:31,568] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-07-04 13:22:31,571] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-07-04 13:22:31,572] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-07-04 13:22:31,572] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-04 13:22:31,581] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2024-07-04 13:22:31,582] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2024-07-04 13:22:31,596] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2024-07-04 13:22:31,596] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2024-07-04 13:22:34,444] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) =================================== Tearing down containers... time="2024-07-04T13:23:39Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." 
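The zookeeper log above shows a standalone server bound to 0.0.0.0:2181 with session timeouts between 6000 ms and 60000 ms. A minimal client-side sanity check using the official ZooKeeper Java client is sketched below; the `zookeeper` hostname comes from the compose network seen in the log, and which root znodes appear depends on what Kafka has registered.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/**
 * Minimal sketch: connect to the standalone ZooKeeper from the logs above and
 * list the root znodes (Kafka registers paths such as /brokers there).
 */
public class ZkSanityCheck {

    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);

        // Session timeout chosen within the server's 6000-60000 ms bounds.
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 30_000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });

        connected.await();
        List<String> children = zk.getChildren("/", false);
        System.out.println("root znodes: " + children);
        zk.close();
    }
}
```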
Container policy-drools-pdp Stopping Container policy-csit Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container policy-drools-pdp Stopped Container policy-drools-pdp Removing Container policy-drools-pdp Removed Container policy-pap Stopping Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container policy-api Stopping Container kafka Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container mariadb Stopping Container mariadb Stopped Container mariadb Removing Container mariadb Removed Network compose_default Removing Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2163 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins4654689824108366930.sh ---> sysstat.sh [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins15346149649438775685.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp ']' + mkdir -p /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/archives/ [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins11013682731718468334.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-D5kL from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-D5kL/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins7913370125465871929.sh provisioning config files... 
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp@tmp/config13760902119018296473tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins9474465113274168039.sh ---> create-netrc.sh [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins10240560979467294856.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-D5kL from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-D5kL/bin to PATH [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins12621759750758617518.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins10852444122620259084.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-D5kL from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-D5kL/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash -l /tmp/jenkins14870305654598834550.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-D5kL from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-D5kL/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-drools-pdp-master-project-csit-verify-drools-pdp/552 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
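The logs-deploy step above archives workspace files matching the glob **/target/surefire-reports/*-output.txt before uploading them to Nexus. The sketch below shows how that kind of glob selects files using java.nio's PathMatcher; the workspace root is taken from the job path in the log, and this is standard glob matching, not necessarily the exact semantics lftools applies.

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.util.stream.Stream;

/**
 * Minimal sketch: list workspace files matching the surefire *-output.txt
 * archiving pattern seen in the logs-deploy step.
 */
public class ArchivePatternSketch {

    public static void main(String[] args) throws IOException {
        Path workspace = Path.of(
                "/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp");
        PathMatcher matcher = FileSystems.getDefault()
                .getPathMatcher("glob:**/target/surefire-reports/*-output.txt");

        try (Stream<Path> paths = Files.walk(workspace)) {
            paths.filter(matcher::matches).forEach(System.out::println);
        }
    }
}
```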
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-21362 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:         x86_64
CPU op-mode(s):       32-bit, 64-bit
Byte Order:           Little Endian
CPU(s):               8
On-line CPU(s) list:  0-7
Thread(s) per core:   1
Core(s) per socket:   1
Socket(s):            8
NUMA node(s):         1
Vendor ID:            AuthenticAMD
CPU family:           23
Model:                49
Model name:           AMD EPYC-Rome Processor
Stepping:             0
CPU MHz:              2800.000
BogoMIPS:             5600.00
Virtualization:       AMD-V
Hypervisor vendor:    KVM
Virtualization type:  full
L1d cache:            32K
L1i cache:            32K
L2 cache:             512K
L3 cache:             16384K
NUMA node0 CPU(s):    0-7
Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used  Avail  Use%  Mounted on
udev             16G     0    16G    0%  /dev
tmpfs           3.2G  708K   3.2G    1%  /run
/dev/vda1       155G   14G   142G    9%  /
tmpfs            16G     0    16G    0%  /dev/shm
tmpfs           5.0M     0   5.0M    0%  /run/lock
tmpfs            16G     0    16G    0%  /sys/fs/cgroup
/dev/vda15      105M  4.4M   100M    5%  /boot/efi
tmpfs           3.2G     0   3.2G    0%  /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         892       25566           0        5708       30819
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:da:99:ae brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.62/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86108sec preferred_lft 86108sec
    inet6 fe80::f816:3eff:feda:99ae/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:7d:cc:f9:bc brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:7dff:fecc:f9bc/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21362)  07/04/24  _x86_64_  (8 CPU)

13:19:57  LINUX RESTART  (8 CPU)

13:20:01       tps      rtps      wtps   bread/s    bwrtn/s
13:21:02    326.23     73.99    252.24   5334.84   59918.68
13:22:01    230.44     32.86    197.58   3288.86   69087.30
13:23:01    356.86     11.95    344.91    771.20  117989.00
13:24:01    150.22      0.58    149.64     71.85   37300.80
Average:    266.08     29.83    236.25   2363.03   71081.84

13:20:01  kbmemfree   kbavail  kbmemused  %memused  kbbuffers   kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
13:21:02   30310256  31667852    2628964      7.98      50648    1628544   1422808     4.19    874144  1485456    62552
13:22:01   26802056  31628556    6137164     18.63     115308    4900920   1671420     4.92   1016152  4656544  2054632
13:23:01   24522648  29721976    8416572     25.55     130936    5226836   8108464    23.86   3122448  4752900     1096
13:24:01   25393164  30800668    7546056     22.91     160004    5371476   5001044    14.71   2103288  4872524      532
Average:   26757031  30954763    6182189     18.77     114224    4281944   4050934    11.92   1779008  3941856   529703

13:20:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
13:21:02           lo      1.73      1.73      0.18      0.18      0.00      0.00      0.00      0.00
13:21:02      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
13:21:02         ens3    257.24    212.85   1030.48     71.74      0.00      0.00      0.00      0.00
13:22:01           lo     11.38     11.38      1.12      1.12      0.00      0.00      0.00      0.00
13:22:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
13:22:01         ens3   1032.29    542.08  27922.49     50.96      0.00      0.00      0.00      0.00
13:23:01           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
13:23:01  veth436196f      8.15      7.80      1.12      1.12      0.00      0.00      0.00      0.00
13:23:01  vetha9fab92      2.85      2.82      0.34      0.30      0.00      0.00      0.00      0.00
13:23:01  veth06c4d9f      4.88      5.92      0.78      0.87      0.00      0.00      0.00      0.00
13:24:01           lo      3.57      3.57      0.33      0.33      0.00      0.00      0.00      0.00
13:24:01  veth436196f     13.06      9.52      1.07      1.29      0.00      0.00      0.00      0.00
13:24:01  veth06c4d9f      0.23      0.67      0.02      0.05      0.00      0.00      0.00      0.00
13:24:01  veth11195e4     52.86     64.19     19.20     15.28      0.00      0.00      0.00      0.00
Average:           lo      4.14      4.14      0.41      0.41      0.00      0.00      0.00      0.00
Average:  veth436196f      5.32      4.35      0.55      0.60      0.00      0.00      0.00      0.00
Average:  veth06c4d9f      1.28      1.65      0.20      0.23      0.00      0.00      0.00      0.00
Average:  veth11195e4     13.27     16.11      4.82      3.83      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21362)  07/04/24  _x86_64_  (8 CPU)

13:19:57  LINUX RESTART  (8 CPU)

13:20:01   CPU    %user    %nice  %system  %iowait   %steal    %idle
13:21:02   all     7.33     0.00     1.05     3.82     0.04    87.76
13:21:02     0     7.30     0.00     1.78     4.88     0.02    86.02
13:21:02     1     8.68     0.00     1.20     0.62     0.05    89.45
13:21:02     2     6.27     0.00     1.45     6.84     0.05    85.39
13:21:02     3     5.66     0.00     0.82    12.87     0.03    80.61
13:21:02     4     3.47     0.00     0.57     0.68     0.03    95.25
13:21:02     5     7.98     0.00     0.43     3.41     0.05    88.13
13:21:02     6     4.06     0.00     0.89     0.42     0.03    94.60
13:21:02     7    15.21     0.00     1.25     0.90     0.05    82.58
13:22:01   all    14.40     0.00     4.05     5.67     0.05    75.82
13:22:01     0    11.34     0.00     3.69     4.89     0.05    80.03
13:22:01     1     8.41     0.00     4.28    12.36     0.07    74.88
13:22:01     2     8.83     0.00     4.01    13.81     0.03    73.32
13:22:01     3     8.78     0.00     4.07     1.93     0.05    85.16
13:22:01     4     8.40     0.00     3.61     1.12     0.03    86.83
13:22:01     5     8.84     0.00     3.44     1.33     0.03    86.36
13:22:01     6    35.05     0.00     4.86     4.30     0.09    55.71
13:22:01     7    25.60     0.00     4.44     5.65     0.05    64.26
13:23:01   all    22.85     0.00     3.93     7.93     0.07    65.21
13:23:01     0    20.69     0.00     3.84     5.83     0.05    69.59
13:23:01     1    14.51     0.00     3.56    25.72     0.07    56.15
13:23:01     2    23.66     0.00     4.09    16.43     0.07    55.75
13:23:01     3    21.98     0.00     4.08     2.28     0.07    71.59
13:23:01     4    23.16     0.00     3.53     0.59     0.08    72.63
13:23:01     5    27.08     0.00     4.03     5.02     0.07    63.81
13:23:01     6    28.55     0.00     4.55     2.11     0.07    64.73
13:23:01     7    23.17     0.00     3.76     5.51     0.07    67.49
13:24:01   all     8.65     0.00     1.99     2.98     0.07    86.32
13:24:01     0     6.85     0.00     2.15     2.69     0.07    88.25
13:24:01     1     7.14     0.00     1.78     5.18     0.05    85.86
13:24:01     2    11.01     0.00     2.06     1.11     0.08    85.73
13:24:01     3    10.34     0.00     2.55     1.01     0.07    86.03
13:24:01     4     9.27     0.00     1.92     3.53     0.07    85.21
13:24:01     5     8.71     0.00     1.75     2.27     0.08    87.19
13:24:01     6     8.17     0.00     1.83     6.32     0.05    83.62
13:24:01     7     7.74     0.00     1.89     1.73     0.07    88.57
Average:   all    13.30     0.00     2.75     5.10     0.06    78.80
Average:     0    11.55     0.00     2.86     4.57     0.05    80.97
Average:     1     9.69     0.00     2.70    10.95     0.06    76.61
Average:     2    12.44     0.00     2.90     9.52     0.06    75.08
Average:     3    11.70     0.00     2.87     4.54     0.05    80.83
Average:     4    11.07     0.00     2.40     1.48     0.05    84.99
Average:     5    13.16     0.00     2.41     3.01     0.06    81.36
Average:     6    18.89     0.00     3.02     3.28     0.06    74.75
Average:     7    17.89     0.00     2.83     3.43     0.06    75.79
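The diagnostics dump above is produced with standard tooling (uname, lscpu, nproc, df, free, ip, and sar from the sysstat package). A small script along these lines reproduces the same report on any host where sysstat has been collecting samples; the output path is illustrative:

    #!/bin/bash
    # Re-collect the host diagnostics shown above. Requires the sysstat package
    # (sar summarizes the sadc samples gathered during the run).
    out=/tmp/host-diagnostics.txt
    {
        echo '---> uname -a:';          uname -a
        echo '---> lscpu:';             lscpu
        echo '---> nproc:';             nproc
        echo '---> df -h:';             df -h
        echo '---> free -m:';           free -m
        echo '---> ip addr:';           ip addr
        echo '---> sar -b -r -n DEV:';  sar -b -r -n DEV
        echo '---> sar -P ALL:';        sar -P ALL
    } > "$out"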