Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-74643 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-ETjbLH8TAHhm/agent.2160
SSH_AGENT_PID=2162
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp@tmp/private_key_6932409499355265001.key (/w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp@tmp/private_key_6932409499355265001.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
Commit message: "Fix timeout in pap CSIT for auditing undeploys"
 > git rev-list --no-walk a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
provisioning config files...
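The checkout above can be reproduced outside Jenkins with plain git; a minimal sketch, assuming anonymous read access to the mirror (the job itself authenticates through the ssh-agent credentials loaded above):

  # Minimal sketch of the checkout Jenkins performs above (assumes anonymous
  # read access; the job uses GIT_SSH with the Gerrit credentials instead).
  git init docker && cd docker
  git fetch --tags git://cloud.onap.org/mirror/policy/docker.git \
      '+refs/heads/*:refs/remotes/origin/*'
  # Detached-HEAD checkout of the exact revision this run built:
  git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec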
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins8387413518031447461.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-y7p4
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-y7p4/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.2 from /tmp/venv-y7p4/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.5.0
aspy.yaml==1.3.0
attrs==24.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.35.34
botocore==1.35.34
bs4==0.0.2
cachetools==5.5.0
certifi==2024.8.30
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.7.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.3
durationpy==0.9
email_validator==2.2.0
filelock==3.16.1
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.35.0
httplib2==0.22.0
identify==2.6.1
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.8.0
kubernetes==31.0.0
lftools==0.37.10
lxml==5.3.0
MarkupSafe==2.1.5
msgpack==1.1.0
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
netifaces==0.11.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.0.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.1.0
oslo.config==9.6.0
oslo.context==5.6.0
oslo.i18n==6.4.0
oslo.log==6.1.2
oslo.serialization==5.5.0
oslo.utils==7.3.0
packaging==24.1
pbr==6.1.0
platformdirs==4.3.6
prettytable==3.11.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.4.0
PyJWT==2.9.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.6.0
python-dateutil==2.9.0.post0
python-heatclient==4.0.0
python-jenkins==1.8.2
python-keystoneclient==5.5.0
python-magnumclient==4.7.0
python-openstackclient==7.1.2
python-swiftclient==4.6.0
PyYAML==6.0.2
referencing==0.35.1
requests==2.32.3
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.20.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.2
simplejson==3.19.3
six==1.16.0
smmap==5.0.1
soupsieve==2.6
stevedore==5.3.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.2
tqdm==4.66.5
typing_extensions==4.12.2
tzdata==2024.2
urllib3==1.26.20
virtualenv==20.26.6
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
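The lf-activate-venv() helper comes from the Linux Foundation's global-jjb tooling; its effect in this step can be approximated as below (a rough sketch of the observed behavior, not the actual implementation; the venv path /tmp/venv-y7p4 is taken from the log):

  # Approximation of what lf-activate-venv() does in this build step.
  python3 -m venv /tmp/venv-y7p4            # "Creating python3 venv at /tmp/venv-y7p4"
  . /tmp/venv-y7p4/bin/activate
  pip install lftools                       # "Installing: lftools"
  export PATH=/tmp/venv-y7p4/bin:$PATH      # "Adding /tmp/venv-y7p4/bin to PATH"
  pip freeze                                # "Generating Requirements File" (the pinned list above)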
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/sh /tmp/jenkins1711255737668462254.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/sh -xe /tmp/jenkins13236193966238946754.sh
+ /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/csit/run-project-csit.sh drools-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress meter trimmed: 60.0M downloaded at ~153M/s]
Setting project configuration for: drools-pdp
Configuring docker compose...
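The missing Compose plugin is fetched with curl (the ~60 MB download above). The exact URL and version pinned by run-project-csit.sh are not shown in the log; the following is a sketch of the standard manual install pattern for the Compose v2 CLI plugin, which matches the observed behavior:

  # Sketch of a manual Docker Compose v2 plugin install (release URL is an
  # assumption; the script may pin a specific version instead of "latest").
  DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
  mkdir -p "$DOCKER_CONFIG/cli-plugins"
  curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
       -o "$DOCKER_CONFIG/cli-plugins/docker-compose"
  chmod +x "$DOCKER_CONFIG/cli-plugins/docker-compose"
  docker compose version    # 'compose' is now a valid docker subcommand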
Starting drools-pdp application
pap Pulling
drools-pdp Pulling
policy-db-migrator Pulling
api Pulling
zookeeper Pulling
mariadb Pulling
kafka Pulling
[docker compose pull progress trimmed: interleaved per-layer "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" status lines for the seven images above]
api Pulled
pap Pulled
policy-db-migrator Pulled
mariadb Pulled
[log excerpt ends mid-pull: the zookeeper, kafka, and drools-pdp images had not yet reported "Pulled"; their remaining layers were still downloading or extracting]
[======================================> ] 74.09MB/97.42MB 8e70b9b9b078 Pull complete 8e70b9b9b078 Pull complete 732c9ebb730c Extracting [==================================================>] 1.111kB/1.111kB 732c9ebb730c Extracting [==================================================>] 1.111kB/1.111kB 732c9ebb730c Extracting [==================================================>] 1.111kB/1.111kB 732c9ebb730c Extracting [==================================================>] 1.111kB/1.111kB e4ef9fa2caeb Extracting [==========================================> ] 83MB/97.42MB e4ef9fa2caeb Extracting [==================================================>] 97.42MB/97.42MB 732c9ebb730c Pull complete 732c9ebb730c Pull complete ed746366f1b8 Extracting [> ] 98.3kB/8.378MB ed746366f1b8 Extracting [> ] 98.3kB/8.378MB e4ef9fa2caeb Pull complete drools-pdp Pulled ed746366f1b8 Extracting [==============================> ] 5.112MB/8.378MB ed746366f1b8 Extracting [==============================> ] 5.112MB/8.378MB ed746366f1b8 Extracting [==================================================>] 8.378MB/8.378MB ed746366f1b8 Extracting [==================================================>] 8.378MB/8.378MB ed746366f1b8 Pull complete ed746366f1b8 Pull complete 10894799ccd9 Extracting [==================================================>] 21.28kB/21.28kB 10894799ccd9 Extracting [==================================================>] 21.28kB/21.28kB 10894799ccd9 Extracting [==================================================>] 21.28kB/21.28kB 10894799ccd9 Extracting [==================================================>] 21.28kB/21.28kB 10894799ccd9 Pull complete 10894799ccd9 Pull complete 8d377259558c Extracting [> ] 458.8kB/43.24MB 8d377259558c Extracting [> ] 458.8kB/43.24MB 8d377259558c Extracting [====================> ] 17.89MB/43.24MB 8d377259558c Extracting [====================> ] 17.89MB/43.24MB 8d377259558c Extracting [==========================================> ] 36.7MB/43.24MB 8d377259558c Extracting [==========================================> ] 36.7MB/43.24MB 8d377259558c Extracting [==================================================>] 43.24MB/43.24MB 8d377259558c Extracting [==================================================>] 43.24MB/43.24MB 8d377259558c Pull complete 8d377259558c Pull complete e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB e7688095d1e6 Pull complete e7688095d1e6 Pull complete 8eab815b3593 Extracting [==================================================>] 853B/853B 8eab815b3593 Extracting [==================================================>] 853B/853B 8eab815b3593 Extracting [==================================================>] 853B/853B 8eab815b3593 Extracting [==================================================>] 853B/853B 8eab815b3593 Pull complete 8eab815b3593 Pull complete 00ded6dd259e Extracting [==================================================>] 98B/98B 00ded6dd259e Extracting [==================================================>] 98B/98B 00ded6dd259e Extracting [==================================================>] 98B/98B 00ded6dd259e Extracting [==================================================>] 98B/98B 00ded6dd259e Pull complete 00ded6dd259e Pull complete 
296f622c8150 Extracting [==================================================>] 172B/172B 296f622c8150 Extracting [==================================================>] 172B/172B 296f622c8150 Extracting [==================================================>] 172B/172B 296f622c8150 Extracting [==================================================>] 172B/172B 296f622c8150 Pull complete 296f622c8150 Pull complete 4ee3050cff6b Extracting [=======> ] 32.77kB/230.6kB 4ee3050cff6b Extracting [=======> ] 32.77kB/230.6kB 4ee3050cff6b Extracting [==================================================>] 230.6kB/230.6kB 4ee3050cff6b Extracting [==================================================>] 230.6kB/230.6kB 4ee3050cff6b Pull complete 4ee3050cff6b Pull complete 519f42193ec8 Extracting [> ] 557.1kB/121.9MB 98acab318002 Extracting [> ] 557.1kB/121.9MB 519f42193ec8 Extracting [=====> ] 12.81MB/121.9MB 98acab318002 Extracting [======> ] 15.04MB/121.9MB 519f42193ec8 Extracting [===========> ] 28.41MB/121.9MB 98acab318002 Extracting [==========> ] 25.07MB/121.9MB 519f42193ec8 Extracting [==================> ] 45.12MB/121.9MB 98acab318002 Extracting [==============> ] 36.21MB/121.9MB 519f42193ec8 Extracting [==========================> ] 63.5MB/121.9MB 98acab318002 Extracting [======================> ] 54.03MB/121.9MB 519f42193ec8 Extracting [================================> ] 79.66MB/121.9MB 98acab318002 Extracting [=============================> ] 72.97MB/121.9MB 519f42193ec8 Extracting [=======================================> ] 96.93MB/121.9MB 98acab318002 Extracting [=====================================> ] 92.47MB/121.9MB 519f42193ec8 Extracting [=============================================> ] 110.9MB/121.9MB 98acab318002 Extracting [=============================================> ] 111.4MB/121.9MB 519f42193ec8 Extracting [================================================> ] 119.2MB/121.9MB 98acab318002 Extracting [=================================================> ] 119.8MB/121.9MB 98acab318002 Extracting [==================================================>] 121.9MB/121.9MB 519f42193ec8 Extracting [==================================================>] 121.9MB/121.9MB 98acab318002 Pull complete 519f42193ec8 Pull complete 878348106a95 Extracting [==================================================>] 3.447kB/3.447kB 878348106a95 Extracting [==================================================>] 3.447kB/3.447kB 5df3538dc51e Extracting [==================================================>] 3.627kB/3.627kB 5df3538dc51e Extracting [==================================================>] 3.627kB/3.627kB 878348106a95 Pull complete 5df3538dc51e Pull complete kafka Pulled zookeeper Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container mariadb Creating Container mariadb Created Container zookeeper Created Container kafka Creating Container policy-db-migrator Creating Container policy-db-migrator Created Container policy-api Creating Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-drools-pdp Creating Container policy-drools-pdp Created Container zookeeper Starting Container mariadb Starting Container zookeeper Started Container kafka Starting Container kafka Started Container mariadb Started Container policy-db-migrator Starting Container policy-db-migrator Started Container policy-api Starting Container policy-api Started Container policy-pap Starting Container policy-pap Started Container 
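The Creating/Starting sequence above encodes the stack's dependency chain: zookeeper before kafka, mariadb before policy-db-migrator, then policy-api and policy-pap, and finally policy-drools-pdp. A minimal sketch of driving that order by hand; the real job uses a compose file with depends_on, which is not shown in this log:

# Hypothetical manual equivalent of the compose startup order above.
# Service names come from the log; the compose file itself is an assumption.
docker compose up -d zookeeper mariadb          # no dependencies
docker compose up -d kafka                      # requires zookeeper
docker compose up -d policy-db-migrator         # requires mariadb
docker compose up -d policy-api policy-pap      # require the migrated DB
docker compose up -d policy-drools-pdp          # requires pap and kafka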
Waiting for REST to come up on localhost port 30216...
NAMES               STATUS
policy-drools-pdp   Up 30 seconds
policy-pap          Up 30 seconds
policy-api          Up 31 seconds
kafka               Up 33 seconds
zookeeper           Up 34 seconds
mariadb             Up 33 seconds
Build docker image for robot framework
Error: No such image: policy-csit-robot
Cloning into '/w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/csit/resources/tests/models'...
Build robot framework docker image
Sending build context to Docker daemon 16.14MB
Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
3.10-slim-bullseye: Pulling from library/python
fa0650a893c2: Pull complete
c11bc7b0e3f4: Pull complete
7bbbc6da0c4e: Pull complete
f988c113d3f9: Pull complete
Digest: sha256:2d77082df40974487cce4c6e82fd84508b5798b74614ffff08030b809fff30dd
Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
 ---> 22d1c3b2c9f7
Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
 ---> Running in 7d8eaadfde29
Removing intermediate container 7d8eaadfde29
 ---> f763ab46c659
Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
 ---> Running in 3560e7729f38
Removing intermediate container 3560e7729f38
 ---> a266a6c001a8
Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST
 ---> Running in c039ed514a79
Removing intermediate container c039ed514a79
 ---> dc50494b7cfa
Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
 ---> Running in 9f03deda4519
bcrypt==4.2.0 certifi==2024.8.30 cffi==1.17.1 charset-normalizer==3.3.2 confluent-kafka==2.5.3 cryptography==43.0.1 decorator==5.1.1 deepdiff==8.0.1 dnspython==2.7.0rc1 future==1.0.0 idna==3.10 Jinja2==3.1.4 jsonpath-rw==1.4.0 kafka-python==2.0.2 MarkupSafe==2.1.5 more-itertools==5.0.0 orderly-set==5.2.2 paramiko==3.5.0 pbr==6.1.0 ply==3.11 protobuf==5.29.0rc1 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.2 requests==2.32.3 robotframework==7.1 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a11 robotlibcore-temp==1.0.2 six==1.16.0 urllib3==2.2.3
Removing intermediate container 9f03deda4519
 ---> 444bfe5a71c0
Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
 ---> Running in d0ef44375d41
Removing intermediate container d0ef44375d41
 ---> cc9a417811d3
Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
 ---> 5dccf95a9b6a
Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
 ---> Running in 7794e04c2402
Removing intermediate container 7794e04c2402
 ---> 5fe1f417f923
Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
 ---> Running in 321e5a3e53db
Removing intermediate container 321e5a3e53db
 ---> f3ca4c11932a
Successfully built f3ca4c11932a
Successfully tagged policy-csit-robot:latest
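The Step 1/9 through 9/9 output above corresponds to a nine-instruction Dockerfile (base image, two ARGs, an ENV, two RUNs, a COPY, WORKDIR, and CMD). A hypothetical local reproduction of the build and run; only the ARG names, image tag, and network name are taken from the log, while the build-arg values and the contents of run-test.sh are assumptions:

# Rebuild the test image as the job does (build-arg values assumed).
docker build \
  --build-arg CSIT_SCRIPT=run-test.sh \
  --build-arg ROBOT_FILE=drools-pdp-test.robot \
  -t policy-csit-robot:latest .

# run-test.sh presumably invokes robot with the -v variables the
# policy-csit container prints further below; running the image on the
# compose network would be enough to reach the services by name.
docker run --rm --name policy-csit --network compose_default \
  policy-csit-robot:latest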
top - 10:46:13 up 3 min, 0 users, load average: 2.47, 1.21, 0.47
Tasks: 196 total, 1 running, 119 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.6 us, 3.1 sy, 0.0 ni, 77.4 id, 4.8 wa, 0.0 hi, 0.1 si, 0.1 st
              total    used    free    shared  buff/cache  available
Mem:            31G    2.5G     23G      1.1M        5.3G         28G
Swap:          1.0G      0B    1.0G
NAMES               STATUS
policy-drools-pdp   Up 56 seconds
policy-pap          Up 56 seconds
policy-api          Up 57 seconds
kafka               Up 59 seconds
zookeeper           Up About a minute
mariadb             Up 59 seconds
CONTAINER ID   NAME                CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
f5cbe62ad75e   policy-drools-pdp   0.43%   225.2MiB / 31.41GiB   0.70%   28.1kB / 35kB     0B / 8.19kB       53
df169345686d   policy-pap          1.60%   511.2MiB / 31.41GiB   1.59%   38.4kB / 41.9kB   0B / 149MB        63
6db472cc5f45   policy-api          0.09%   470.7MiB / 31.41GiB   1.46%   988kB / 647kB     0B / 0B           53
c69dd0d2ee01   kafka               4.26%   372.3MiB / 31.41GiB   1.16%   107kB / 99.8kB    0B / 549kB        87
4ffa609d329a   zookeeper           0.07%   85.77MiB / 31.41GiB   0.27%   51.8kB / 46.5kB   229kB / 389kB     62
3d7176116c00   mariadb             0.02%   102.6MiB / 31.41GiB   0.32%   936kB / 1.18MB    10.9MB / 71.8MB   39
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v CLAMP_K8S_TEST:
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
NAMES               STATUS
policy-drools-pdp   Up About a minute
policy-pap          Up About a minute
policy-api          Up About a minute
kafka               Up About a minute
zookeeper           Up About a minute
mariadb             Up About a minute
Shut down started!
Collecting logs from docker compose containers...
======== Logs from kafka ========
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
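The "Check if Zookeeper is healthy" preflight above can be probed by hand with ZooKeeper's four-letter-word commands over the client port. A minimal sketch, assuming a shell with netcat on the same docker network; note the server must whitelist these commands via 4lw.commands.whitelist:

# Manual equivalent of the health preflight (not the exact check the
# image runs, which uses io.confluent.admin.utils as logged below).
echo ruok | nc -w 2 zookeeper 2181   # prints "imok" when healthy
echo srvr | nc -w 2 zookeeper 2181   # version, latency, and mode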
kafka | [2024-10-05 10:45:17,615] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,615] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,615] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,615] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/kafka-raft-7.7.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/utility-belt-7.7.1-30.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/common-utils-7.7.1.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/kafka_2.13-7.7.1-ccs.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.7.1-ccs.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-7.7.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.7.1-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-server-common-7.7.1-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-4.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.7.1-ccs.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.7.1-ccs.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.7.1-ccs.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.7.1.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/us
r/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar:/usr/share/java/cp-base-new/kafka-metadata-7.7.1-ccs.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,616] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,619] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,621] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-10-05 10:45:17,625] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-10-05 10:45:17,631] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:17,641] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:17,641] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:17,649] INFO Socket connection established, initiating session, client: /172.17.0.4:56996, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:17,840] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000024b210000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:17,954] INFO Session: 0x10000024b210000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:17,954] INFO EventThread shut down for session: 0x10000024b210000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... 
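The connect/establish/close sequence above is the preflight finishing cleanly before the broker launches. The ZookeeperConnectionWatcher class in the log belongs to Confluent's admin utilities, which are normally driven by the cub tool shipped in cp images; a sketch of the equivalent invocation, with the timeout value assumed:

# Hypothetical rerun of the ZooKeeper readiness preflight.
# cub (Confluent Utility Belt) waits up to the given seconds for a session.
cub zk-ready zookeeper:2181 40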
kafka | ===> Launching kafka ... kafka | [2024-10-05 10:45:18,526] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-10-05 10:45:18,727] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-10-05 10:45:18,793] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-10-05 10:45:18,794] INFO starting (kafka.server.KafkaServer) kafka | [2024-10-05 10:45:18,794] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-10-05 10:45:18,805] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-10-05 10:45:18,808] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,808] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,808] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,808] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,808] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,808] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/connect-json-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/us
r/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetr
y-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/trogdor-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.1-ccs.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,809] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,811] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper) kafka | [2024-10-05 10:45:18,814] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-10-05 10:45:18,818] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:18,820] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-10-05 10:45:18,823] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:18,828] INFO Socket connection established, initiating session, client: /172.17.0.4:56998, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:18,838] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000024b210001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-10-05 10:45:18,843] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-10-05 10:45:19,108] INFO Cluster ID = E85j95xVQXmDZtJdQjKymw (kafka.server.KafkaServer) kafka | [2024-10-05 10:45:19,151] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | eligible.leader.replicas.enable = false kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.rebalance.protocols = [classic] kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 
kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.7-IV4 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | 
num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.allow.dn.changes = false kafka | ssl.allow.san.changes = false kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null 
kafka | ssl.truststore.type = JKS kafka | telemetry.max.bytes = 1048576 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | unstable.metadata.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.metadata.migration.min.batch.size = 200 kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-10-05 10:45:19,178] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-10-05 10:45:19,178] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-10-05 10:45:19,179] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-10-05 10:45:19,181] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-10-05 10:45:19,186] INFO [KafkaServer id=1] Rewriting /var/lib/kafka/data/meta.properties (kafka.server.KafkaServer) kafka | [2024-10-05 10:45:19,250] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-10-05 10:45:19,254] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-10-05 10:45:19,264] INFO Loaded 0 logs in 14ms (kafka.log.LogManager) kafka | [2024-10-05 10:45:19,265] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-10-05 10:45:19,266] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) kafka | [2024-10-05 10:45:19,274] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-10-05 10:45:19,315] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-10-05 10:45:19,326] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-10-05 10:45:19,339] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-10-05 10:45:19,362] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2024-10-05 10:45:19,618] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-10-05 10:45:19,632] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-10-05 10:45:19,632] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-10-05 10:45:19,636] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-10-05 10:45:19,639] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2024-10-05 10:45:19,658] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,659] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,663] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,663] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,666] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,678] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-10-05 10:45:19,679] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2024-10-05 10:45:19,703] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2024-10-05 10:45:19,729] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1728125119716,1728125119716,1,0,0,72057603888316417,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2024-10-05 10:45:19,731] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2024-10-05 10:45:19,762] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2024-10-05 10:45:19,767] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,773] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,773] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,781] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2024-10-05 10:45:19,786] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:19,792] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,794] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:19,795] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,800] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-10-05 10:45:19,808] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-10-05 10:45:19,812] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2024-10-05 10:45:19,812] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-10-05 10:45:19,822] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.7-IV4, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
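(kafka.server.metadata.ZkMetadataCache)

The /brokers/ids/1 znode logged above is the broker's ephemeral ZooKeeper registration carrying its advertised listeners (PLAINTEXT://kafka:9092, PLAINTEXT_HOST://localhost:29092). A hedged sketch for reading it back with the plain ZooKeeper client, using the same connect string and session timeout as the broker config, which resolves only where the zookeeper hostname does (e.g. inside the compose network):

    import org.apache.zookeeper.ZooKeeper;

    public class ShowBrokerRegistration {
        public static void main(String[] args) throws Exception {
            // zookeeper.connect = zookeeper:2181, zookeeper.session.timeout.ms = 18000,
            // both taken from the broker config dump earlier in this log.
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, event -> { });
            try {
                // Ephemeral znode written at registration; JSON listing the endpoints.
                byte[] data = zk.getData("/brokers/ids/1", false, null);
                System.out.println(new String(data));
            } finally {
                zk.close();
            }
        }
    }
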
kafka | [2024-10-05 10:45:19,823] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,831] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,835] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,845] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,851] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-10-05 10:45:19,862] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,866] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,871] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-10-05 10:45:19,873] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2024-10-05 10:45:19,878] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2024-10-05 10:45:19,878] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,879] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,880] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,880] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,883] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,883] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,883] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,884] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2024-10-05 10:45:19,885] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2024-10-05 10:45:19,885] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,887] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2024-10-05 10:45:19,887] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2024-10-05 10:45:19,889] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) kafka | [2024-10-05 10:45:19,893] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2024-10-05 10:45:19,893] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-10-05 10:45:19,896] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-10-05 10:45:19,897] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-10-05 10:45:19,897] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-10-05 10:45:19,897] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-10-05 10:45:19,899] INFO Kafka version: 7.7.1-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-10-05 10:45:19,899] INFO Kafka commitId: 91d86f33092378c89731b4a9cf1ce5db831a2b07 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-10-05 10:45:19,899] INFO Kafka startTimeMs: 1728125119896 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-10-05 10:45:19,900] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-10-05 10:45:19,901] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2024-10-05 10:45:19,901] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,902] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2024-10-05 10:45:19,908] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,908] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,908] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,908] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,909] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,925] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:19,945] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) kafka | [2024-10-05 10:45:19,972] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-10-05 10:45:19,973] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) 
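(kafka.server.NodeToControllerRequestThread)

At this point the broker reports started (Kafka 7.7.1-ccs) and has elected itself controller with epoch 1. Both facts are checkable from a client; a sketch with the AdminClient API, under the same localhost:29092 bootstrap assumption as above:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.DescribeClusterResult;

    public class ShowCluster {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                DescribeClusterResult cluster = admin.describeCluster();
                // Expect a single node (id 1) that is also the active controller.
                System.out.println("controller: " + cluster.controller().get());
                System.out.println("brokers:    " + cluster.nodes().get());
            }
        }
    }
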
kafka | [2024-10-05 10:45:22,431] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-10-05 10:45:22,450] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:22,460] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:22,470] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-10-05 10:45:22,478] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(mekPLgqtRQ-f2ExCnsmm5w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:22,479] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:22,482] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,482] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,486] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,503] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,505] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-10-05 10:45:22,506] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2024-10-05 10:45:22,507] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,508] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,508] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,513] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,513] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(1Tck1i8CShmDwFwEMylOqQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:22,514] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 
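(state.change.logger)

The state.change.logger entries here and below are the controller walking each new partition through NonExistentPartition -> NewPartition; they stem from the two topic creations logged earlier: policy-pdp-pap with an empty config, and __consumer_offsets with {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} across 50 partitions. The broker creates __consumer_offsets internally, but the equivalent explicit creation of an ordinary topic with the same overrides would look roughly like this sketch (the topic name example-offsets-like is made up for illustration):

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopics {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                // policy-pdp-pap as logged: empty config, one partition, replication factor 1.
                NewTopic pdpPap = new NewTopic("policy-pdp-pap", 1, (short) 1);
                // A user topic carrying the same overrides the log shows for __consumer_offsets.
                NewTopic offsetsLike = new NewTopic("example-offsets-like", 50, (short) 1)
                        .configs(Map.of(
                                "compression.type", "producer",
                                "cleanup.policy", "compact",
                                "segment.bytes", "104857600"));
                admin.createTopics(List.of(pdpPap, offsetsLike)).all().get();
            }
        }
    }
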
kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition 
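to NewPartition with assigned replicas 1 (state.change.logger)

Once these partitions reach OnlinePartition (further down in this log), the result is visible to any client. A sketch that prints leader and ISR per partition, which in this single-broker setup should report leader=1 and isr=[1] throughout (allTopicNames() assumes kafka-clients 3.1 or newer):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class ShowPartitionState {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription d = admin.describeTopics(List.of("__consumer_offsets"))
                        .allTopicNames().get().get("__consumer_offsets");
                // With a single broker, every partition should report leader=1, isr=[1].
                d.partitions().forEach(p -> System.out.println(
                        "partition " + p.partition() + " leader=" + p.leader() + " isr=" + p.isr()));
            }
        }
    }
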
kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-10-05 10:45:22,516] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state 
of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,518] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-10-05 10:45:22,519] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,527] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-10-05 10:45:22,528] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) kafka | [2024-10-05 10:45:22,528] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,598] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,609] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,611] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,612] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,614] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(mekPLgqtRQ-f2ExCnsmm5w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,623] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-10-05 10:45:22,632] INFO [Broker id=1] Finished LeaderAndIsr request in 121ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,638] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=mekPLgqtRQ-f2ExCnsmm5w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-10-05 10:45:22,643] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-10-05 10:45:22,644] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-10-05 10:45:22,645] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-10-05 10:45:22,648] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
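partitionEpoch=0) (state.change.logger)

Per the entries above, policy-pdp-pap-0 now has a live log under /var/lib/kafka/data, leader epoch 0 and high watermark 0, so a first produce would be acknowledged at offset 0. A smoke-test sketch, again assuming the localhost:29092 listener; the JSON payload is a placeholder, not a real PDP-PAP message:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SmokeProduce {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Placeholder payload; real traffic on this topic is JSON from the policy components.
                RecordMetadata md = producer
                        .send(new ProducerRecord<>("policy-pdp-pap", "{\"placeholder\":true}"))
                        .get();
                System.out.println("wrote " + md.topic() + "-" + md.partition() + " @ offset " + md.offset());
            }
        }
    }
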
kafka | [2024-10-05 10:45:22,648] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,648] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,648] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,648] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,648] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 
10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-10-05 10:45:22,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-10-05 10:45:22,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-10-05 10:45:22,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-10-05 10:45:22,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 
1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-10-05 10:45:22,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-10-05 10:45:22,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-10-05 10:45:22,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-10-05 10:45:22,653] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-10-05 10:45:22,654] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-10-05 10:45:22,654] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2024-10-05 10:45:22,655] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 
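
The state-change entries above show the controller (id 1, epoch 1) walking every __consumer_offsets partition from NewPartition to OnlinePartition, sending broker 1 a single become-leader LeaderAndIsr request covering all 50 partitions (0 become-follower, consistent with a one-broker cluster), and then promoting each replica from NewReplica to OnlineReplica. A minimal client-side spot-check of that outcome is sketched below; it assumes kafka-python is installed and the compose broker is reachable at localhost:9092, neither of which is part of this job's tooling.

# Hypothetical spot-check (not part of this CSIT job): confirm that the
# __consumer_offsets topic the controller just brought online exposes all
# 50 partitions to a client. Bootstrap address is an assumption.
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
partitions = consumer.partitions_for_topic("__consumer_offsets") or set()

# The LeaderAndIsr request above covered partition indexes 0..49 on broker 1.
assert partitions == set(range(50)), f"unexpected partition set: {partitions}"
print(f"__consumer_offsets has {len(partitions)} partitions")
consumer.close()
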
kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,658] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,659] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka 
| [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,661] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,661] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-10-05 10:45:22,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-10-05 10:45:22,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-10-05 10:45:22,675] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, 
__consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2024-10-05 10:45:22,675] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) kafka | [2024-10-05 10:45:22,683] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,683] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,683] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,683] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,684] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,694] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,695] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,695] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,695] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,695] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,702] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,702] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,702] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,703] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,703] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,732] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,732] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,732] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,732] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,732] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,740] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,741] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,741] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,741] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,742] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,747] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,748] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,748] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,748] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,748] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,760] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,761] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,761] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,761] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,761] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,768] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,768] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,769] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,769] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,769] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,775] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,775] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,775] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,775] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,775] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,784] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,784] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,784] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,785] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,785] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,792] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,793] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,793] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,793] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,793] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,804] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,805] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,805] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,805] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,805] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,813] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,813] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,814] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,814] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,814] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,819] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,820] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,820] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,820] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,820] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,828] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,829] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,829] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,829] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,829] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,870] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,871] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,871] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,871] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,871] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,880] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,880] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,881] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,881] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,881] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,887] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,888] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,888] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,888] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,888] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,896] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,896] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,897] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,897] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,897] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,903] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,903] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,903] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,903] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,904] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,911] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,911] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,911] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,912] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,912] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,918] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,919] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,919] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,919] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,919] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,926] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,926] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,927] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,927] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,927] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,936] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,937] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,937] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,937] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,937] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,943] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,943] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,943] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,943] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,944] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,955] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,956] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,956] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,956] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,956] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,962] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,963] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,963] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,963] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,963] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,970] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,970] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,970] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,970] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,971] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,977] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,978] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,978] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,978] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,978] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:22,985] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,985] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,985] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,985] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,985] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:22,994] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:22,995] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:22,995] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,995] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:22,995] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,004] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,005] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,005] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,005] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,005] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:23,011] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,011] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,011] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,011] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,011] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,017] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,018] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,018] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,018] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,018] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,024] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,024] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,024] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,024] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,024] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:23,033] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,034] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,034] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,034] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,034] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,047] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,047] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,047] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,048] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,048] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,054] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,054] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,054] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,054] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,054] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:23,061] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,061] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,061] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,061] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,061] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,069] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,069] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,069] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,069] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,069] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,075] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,076] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,076] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,076] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,076] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:23,083] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,083] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,083] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,083] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,083] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,088] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,088] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,089] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,089] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,089] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,095] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,095] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,095] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,095] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,095] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:23,104] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,105] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,105] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,105] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,106] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,113] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,115] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,115] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,115] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,115] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,120] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,121] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,121] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,121] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,121] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-10-05 10:45:23,129] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,131] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,131] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,131] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,132] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,141] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,142] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,142] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,142] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,142] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-10-05 10:45:23,186] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-10-05 10:45:23,186] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-10-05 10:45:23,186] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,187] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-10-05 10:45:23,187] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(1Tck1i8CShmDwFwEMylOqQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
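
To confirm those properties after the fact, the topic's effective configuration can be read back through the admin API; a sketch, again assuming the broker is reachable under the compose hostname kafka:9092 shown in this log:

# Sketch: read back the effective config of __consumer_offsets and print
# the three properties the LogManager lines above report.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
resource = ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets")
# describe_configs() returns {ConfigResource: future}; the future resolves
# to a {config_name: ConfigEntry} mapping.
configs = admin.describe_configs([resource])[resource].result()
for key in ("cleanup.policy", "compression.type", "segment.bytes"):
    print(key, "=", configs[key].value)
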
(state.change.logger) kafka | [2024-10-05 10:45:23,190] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-10-05 10:45:23,190] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-10-05 10:45:23,191] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-10-05 10:45:23,192] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-13 (state.change.logger) kafka | [2024-10-05 10:45:23,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-10-05 10:45:23,194] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,195] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,197] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,197] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,197] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,197] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,197] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,197] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,197] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,198] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,198] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 
0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,199] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,199] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,201] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,201] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,201] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,201] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,201] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,201] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,201] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,201] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,201] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,202] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,202] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,202] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,202] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,202] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,202] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,202] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,202] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,202] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,202] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,203] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,203] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,203] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,203] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,203] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,203] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 
0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,203] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,203] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,203] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,203] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,203] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,203] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,203] INFO [Broker id=1] Finished LeaderAndIsr request in 545ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2024-10-05 10:45:23,205] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=1Tck1i8CShmDwFwEMylOqQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-10-05 10:45:23,207] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
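
The LeaderAndIsrResponseData blob above carries one LeaderAndIsrPartitionError per partition, all with errorCode=0 in this healthy run. When triaging a failed run, a quick regex pass picks the non-zero codes out of such a line; a stdlib-only sketch (the sample string stands in for a line read from this console log, with the second entry deliberately altered to a non-zero code so the filter has something to find):

import re

# One entry per partition inside a LeaderAndIsrResponseData blob.
line = ("LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), "
        "LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=7)")

pairs = re.findall(r"partitionIndex=(\d+), errorCode=(\d+)", line)
bad = [(int(p), int(c)) for p, c in pairs if int(c) != 0]
print(f"{len(pairs)} partitions parsed, {len(bad)} with errors: {bad}")
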
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
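
Each "Cached leader info" line records leader=1, isr=[1], replicas=[1] for one __consumer_offsets partition. The same view is available to any client via cluster metadata; a sketch with confluent-kafka, assuming the broker is reachable as kafka:9092:

# Sketch: list partition leadership for __consumer_offsets and compare it
# with the UpdateMetadataPartitionState entries logged above.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
md = admin.list_topics(topic="__consumer_offsets", timeout=10)
partitions = md.topics["__consumer_offsets"].partitions
# Expect 50 partitions, all led by broker 1 with ISR [1], matching the log.
for pid, p in sorted(partitions.items()):
    print(f"partition {pid}: leader={p.leader} replicas={p.replicas} isrs={p.isrs}")
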
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,208] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,210] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,211] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-10-05 10:45:23,211] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 14 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,211] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-10-05 10:45:23,212] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,212] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,212] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,213] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,213] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,213] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,213] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,214] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,214] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,214] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,214] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-10-05 10:45:23,244] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c57a536f-f97b-4060-9039-ae771b91de49 in Empty state. Created a new member id consumer-c57a536f-f97b-4060-9039-ae771b91de49-2-f9d14682-3b4e-414a-9527-3a17217236ec and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:23,255] INFO [GroupCoordinator 1]: Preparing to rebalance group c57a536f-f97b-4060-9039-ae771b91de49 in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member consumer-c57a536f-f97b-4060-9039-ae771b91de49-2-f9d14682-3b4e-414a-9527-3a17217236ec with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:24,927] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:24,927] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:24,933] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:24,934] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) kafka | [2024-10-05 10:45:26,264] INFO [GroupCoordinator 1]: Stabilized group c57a536f-f97b-4060-9039-ae771b91de49 generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:26,282] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c57a536f-f97b-4060-9039-ae771b91de49-2-f9d14682-3b4e-414a-9527-3a17217236ec for group c57a536f-f97b-4060-9039-ae771b91de49 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:49,399] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-18cf91c0-701a-489f-b802-8500a9b0800f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:49,399] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 1bc68f55-3c38-4fc5-889a-f7dc8bd995b5 in Empty state. Created a new member id consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3-25b49f04-f485-4a5c-b427-86ae650a8f73 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:49,402] INFO [GroupCoordinator 1]: Preparing to rebalance group 1bc68f55-3c38-4fc5-889a-f7dc8bd995b5 in state PreparingRebalance with old generation 0 (__consumer_offsets-47) (reason: Adding new member consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3-25b49f04-f485-4a5c-b427-86ae650a8f73 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:49,403] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-18cf91c0-701a-489f-b802-8500a9b0800f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:52,402] INFO [GroupCoordinator 1]: Stabilized group 1bc68f55-3c38-4fc5-889a-f7dc8bd995b5 generation 1 (__consumer_offsets-47) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:52,404] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:52,421] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-18cf91c0-701a-489f-b802-8500a9b0800f for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-10-05 10:45:52,421] INFO [GroupCoordinator 1]: Assignment received from leader consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3-25b49f04-f485-4a5c-b427-86ae650a8f73 for group 1bc68f55-3c38-4fc5-889a-f7dc8bd995b5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) =================================== ======== Logs from mariadb ======== mariadb | 2024-10-05 10:45:13+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-10-05 10:45:14+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-10-05 10:45:14+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-10-05 10:45:14+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-10-05 10:45:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-10-05 10:45:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-10-05 10:45:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
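The GroupCoordinator entries in the Kafka log above show the two-step join protocol: a consumer's first JoinGroup with an unknown member id is rejected with MemberIdRequiredException, the coordinator hands back a generated member id, and the client rejoins with it before the group stabilizes at generation 1. A minimal way to reproduce that sequence against this broker, assuming the standard Kafka CLI tools are on the path (the topic and group names here are illustrative, not taken from this run):
# First JoinGroup is answered with MemberIdRequiredException (handled
# internally by the client), the consumer rejoins with the assigned member
# id, and the coordinator logs the same Preparing/Stabilized pair as above.
$ kafka-console-consumer.sh --bootstrap-server kafka:9092 \
    --topic policy-pdp-pap --group demo-group --timeout-ms 10000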
mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-10-05 10:45:15+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-10-05 10:45:15+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-10-05 10:45:15+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-10-05 10:45:15 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 100 ... mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-10-05 10:45:15 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-10-05 10:45:15 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-10-05 10:45:15 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-10-05 10:45:15 0 [Note] InnoDB: log sequence number 45602; transaction id 14 mariadb | 2024-10-05 10:45:15 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-10-05 10:45:15 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-10-05 10:45:15 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-10-05 10:45:15 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-10-05 10:45:15 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-10-05 10:45:16+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-10-05 10:45:18+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-10-05 10:45:18+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-10-05 10:45:18+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-10-05 10:45:18+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. 
mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-10-05 10:45:19+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-10-05 10:45:19 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Starting shutdown... 
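The traced db.sh run above creates six schemas, grants ${MYSQL_USER} (policy_user in this run) full privileges on each, then loads /tmp/policy-clamp-create-tables.sql into policyclamp. A quick way to confirm the result, reusing the credentials the traced script echoed (CSIT test credentials only; the mariadb host name assumes the same compose network):
# List the created schemas and the grants for policy_user.
$ mysql -h mariadb -uroot -psecret --execute "SHOW DATABASES;"
$ mysql -h mariadb -uroot -psecret --execute "SHOW GRANTS FOR 'policy_user'@'%';"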
mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Buffer pool(s) dump completed at 241005 10:45:19 mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Shutdown completed; log sequence number 324060; transaction id 298 mariadb | 2024-10-05 10:45:19 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-10-05 10:45:19+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-10-05 10:45:19+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-10-05 10:45:19 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-10-05 10:45:19 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-10-05 10:45:19 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-10-05 10:45:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: log sequence number 324060; transaction id 299 mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-10-05 10:45:19 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-10-05 10:45:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-10-05 10:45:19 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-10-05 10:45:19 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-10-05 10:45:19 0 [Note] Server socket created on IP: '::'. mariadb | 2024-10-05 10:45:19 0 [Note] mariadbd: ready for connections. 
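The repeated io_uring_queue_init() ENOSYS warnings above mean the io_uring syscalls are unavailable to the container (kernel older than 5.1, or a seccomp filter blocking them), so InnoDB falls back to innodb_use_native_aio=OFF; this is functionally harmless for a CSIT run, just slower async I/O. Two quick checks on the Docker host (a sketch, assuming standard Docker tooling):
# io_uring needs a kernel newer than 5.1 and the io_uring_* syscalls
# permitted by the container's seccomp profile.
$ uname -r
$ docker info --format '{{.SecurityOptions}}'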
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-10-05 10:45:19 0 [Note] InnoDB: Buffer pool(s) load completed at 241005 10:45:19 mariadb | 2024-10-05 10:45:19 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.5' (This connection closed normally without authentication) mariadb | 2024-10-05 10:45:19 17 [Warning] Aborted connection 17 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) mariadb | 2024-10-05 10:45:20 21 [Warning] Aborted connection 21 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) mariadb | 2024-10-05 10:45:20 31 [Warning] Aborted connection 31 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-10-05 10:45:20 45 [Warning] Aborted connection 45 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.5:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-10-05T10:45:27.728+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-10-05T10:45:27.788+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-10-05T10:45:27.789+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-10-05T10:45:29.634+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-10-05T10:45:29.713+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 69 ms. Found 6 JPA repository interfaces. policy-api | [2024-10-05T10:45:30.140+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-10-05T10:45:30.141+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-10-05T10:45:30.720+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-10-05T10:45:30.729+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-10-05T10:45:30.732+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-10-05T10:45:30.732+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-10-05T10:45:30.826+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-10-05T10:45:30.827+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2973 ms policy-api | [2024-10-05T10:45:31.243+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-10-05T10:45:31.312+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-10-05T10:45:31.362+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-10-05T10:45:31.681+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-10-05T10:45:31.709+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-10-05T10:45:31.796+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@288ca5f0 policy-api | [2024-10-05T10:45:31.798+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2024-10-05T10:45:33.731+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-10-05T10:45:33.735+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-10-05T10:45:34.726+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-10-05T10:45:35.567+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-10-05T10:45:36.740+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-10-05T10:45:36.961+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@5ef53e42, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@43ec61f0, org.springframework.security.web.context.SecurityContextHolderFilter@6707ab9, org.springframework.security.web.header.HeaderWriterFilter@3b14d63b, org.springframework.security.web.authentication.logout.LogoutFilter@3e83ab7b, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@7655a302, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1844e563, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3862381d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@335d6f94, org.springframework.security.web.access.ExceptionTranslationFilter@4fa650e1, org.springframework.security.web.access.intercept.AuthorizationFilter@dffa7ce] policy-api | [2024-10-05T10:45:37.809+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-10-05T10:45:37.906+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-10-05T10:45:37.935+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-10-05T10:45:37.956+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.871 seconds (process running for 11.488) =================================== ======== Logs from csit-tests ========
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v CLAMP_K8S_TEST:
policy-csit | Starting Robot test suites ...
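Each ROBOT_VARIABLES entry above is a -v NAME:value override that Robot Framework exposes to the suites as ${NAME}; that is how drools-pdp-test.robot resolves service endpoints such as ${POLICY_DROOLS_IP}. A minimal equivalent invocation (a sketch; the suite's path inside the CSIT image is not shown in this log):
# Same mechanism as the run above, trimmed to the variables this suite needs.
$ robot -v POLICY_DROOLS_IP:policy-drools-pdp:9696 \
    -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
    --outputdir /tmp/results drools-pdp-test.robot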
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
=================================== ======== Logs from policy-db-migrator ======== policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120)
DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
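The migrator's summary above ("policyadmin: upgrade available: 0 -> 1300") drives which numbered scripts run: each "> upgrade NNNN-*.sql" block is applied in ascending order, and every statement is an idempotent CREATE TABLE IF NOT EXISTS, so a rerun against an already-migrated schema is a no-op. A sketch of verifying the outcome, reusing the policy_user credentials seen earlier in this log (the mariadb host name assumes the same compose network):
# The DDL above is idempotent, so the simplest check is that the tables exist.
$ mysql -h mariadb -upolicy_user -ppolicy_user policyadmin --execute "SHOW TABLES;"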
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | 
-------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 
policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, 
conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, 
requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | 
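Each concept-container table above pairs with a membership table of the same shape: four container-key columns (including concpetContainerMapVersion, spelled exactly as in the DDL) plus a nullable (name, version) for the member concept. A hedged membership lookup, hypothetical and not executed by this job, assuming conceptContainerName/conceptContainerVersion reference the owning container row:

#!/bin/sh
# Hypothetical membership lookup against the join-table shape created above.
mysql -upolicy_user -ppolicy_user -h mariadb -P 3306 policyadmin <<'SQL'
SELECT m.conceptContainerName, m.conceptContainerVersion, m.name, m.version
FROM toscanodetypes_toscanodetype m
JOIN toscanodetypes c
  ON c.name = m.conceptContainerName
 AND c.version = m.conceptContainerVersion;
SQL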
policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, 
derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) 
NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 
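Scripts 0960 through 0980 above turn those lookup columns into enforced foreign keys, all declared ON UPDATE RESTRICT ON DELETE RESTRICT. A small sketch of what RESTRICT enforces, with hypothetical rows that this job never inserts: a parent row cannot be deleted while a child still references it.

#!/bin/sh
# Illustration of ON DELETE RESTRICT using the FK added by 0980 above
# (hypothetical data, assumes the policyadmin schema).
mysql -upolicy_user -ppolicy_user -h mariadb -P 3306 policyadmin <<'SQL'
INSERT INTO toscarequirements (name, version) VALUES ('demo.reqs', '1.0.0');
INSERT INTO toscanodetype (name, version, requirementsName, requirementsVersion)
  VALUES ('demo.node', '1.0.0', 'demo.reqs', '1.0.0');
-- This DELETE fails (errno 1451, row is referenced) because of
-- FK_ToscaNodeType_requirementsName ... ON DELETE RESTRICT:
DELETE FROM toscarequirements WHERE name = 'demo.reqs' AND version = '1.0.0';
SQL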
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES 
toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP 
datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- 
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | TRUNCATE TABLE sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 
policy-db-migrator | -------------- policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- policy-db-migrator |
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | name version
policy-db-migrator | policyadmin 1300
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:20
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:21
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:22
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:23
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:24
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:24
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:24
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:24
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:24
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:24
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0510241045200800u 1 2024-10-05 10:45:24
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0510241045200900u 1 2024-10-05 10:45:24
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0510241045201000u 1 2024-10-05 10:45:25
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0510241045201100u 1 2024-10-05 10:45:25
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0510241045201200u 1 2024-10-05 10:45:25
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0510241045201200u 1 2024-10-05 10:45:25
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0510241045201200u 1 2024-10-05 10:45:25
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0510241045201200u 1 2024-10-05 10:45:25
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0510241045201300u 1 2024-10-05 10:45:25
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0510241045201300u 1 2024-10-05 10:45:25
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0510241045201300u 1 2024-10-05 10:45:25
policy-db-migrator | policyadmin: OK @ 1300
=================================== ======== Logs from drools-pdp ========
policy-drools-pdp | Waiting for mariadb port 3306... policy-drools-pdp | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-drools-pdp | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! policy-drools-pdp | Waiting for kafka port 9092... policy-drools-pdp | Connection to kafka (172.17.0.4) 9092 port [tcp/*] succeeded! policy-drools-pdp | -- /opt/app/policy/bin/pdpd-entrypoint.sh boot -- policy-drools-pdp | + operation=boot policy-drools-pdp | + dockerBoot policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- dockerBoot --' policy-drools-pdp | -- dockerBoot -- policy-drools-pdp | + set -x policy-drools-pdp | + set -e policy-drools-pdp | + configure policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- configure --' policy-drools-pdp | -- configure -- policy-drools-pdp | + set -x policy-drools-pdp | + reload policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- reload --' policy-drools-pdp | -- reload -- policy-drools-pdp | + set -x policy-drools-pdp | + systemConfs policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- systemConfs --' policy-drools-pdp | -- systemConfs -- policy-drools-pdp | + set -x policy-drools-pdp | + local confName policy-drools-pdp | + ls '/tmp/policy-install/config/*.conf' policy-drools-pdp | + return 0 policy-drools-pdp | + maven policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- maven --' policy-drools-pdp | -- maven -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/settings.xml ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/standalone-settings.xml ] policy-drools-pdp | + features policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- features --' policy-drools-pdp | -- features -- policy-drools-pdp | + set -x policy-drools-pdp | + ls '/tmp/policy-install/config/features*.zip' policy-drools-pdp | + return 0 policy-drools-pdp | + security policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- security --' policy-drools-pdp | -- security -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-keystore ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-truststore ] policy-drools-pdp | + serverConfig properties
policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=properties' policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + echo 'configuration properties: /tmp/policy-install/config/engine-system.properties' policy-drools-pdp | configuration properties: /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + cp -f /tmp/policy-install/config/engine-system.properties /opt/app/policy/config policy-drools-pdp | + serverConfig xml policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=xml' policy-drools-pdp | + ls '/tmp/policy-install/config/*.xml' policy-drools-pdp | + return 0 policy-drools-pdp | + serverConfig json policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=json' policy-drools-pdp | + ls '/tmp/policy-install/config/*.json' policy-drools-pdp | + return 0 policy-drools-pdp | + scripts pre.sh policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- scripts --' policy-drools-pdp | -- scripts -- policy-drools-pdp | + set -x policy-drools-pdp | + local 'scriptExtSuffix=pre.sh' policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + PATH=/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + PATH=/usr/lib/jvm/java-17-openjdk/bin:/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + echo 'executing script: /tmp/policy-install/config/noop.pre.sh' policy-drools-pdp | executing script: /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + chmod 644 /opt/app/policy/config/engine.properties /opt/app/policy/config/feature-lifecycle.properties policy-drools-pdp | + db policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- db --' policy-drools-pdp | -- db -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -z mariadb ] policy-drools-pdp | + '[' -z 3306 ] policy-drools-pdp | + echo 'Waiting for mariadb:3306 ...' policy-drools-pdp | + timeout 120 sh -c 'until nc -vz -w 20 "${SQL_HOST}" "${SQL_PORT}"; do echo -n "."; sleep 1; done' policy-drools-pdp | Waiting for mariadb:3306 ... policy-drools-pdp | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 
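The trace above amounts to a small readiness gate: poll the database port until it accepts TCP connections, then hand off to the migrator. A condensed rerun of the same commands (the explicit export is an addition here so the inner sh sees the variables; the real entrypoint relies on set -a instead):

#!/bin/sh
# Readiness gate as traced above: poll with nc (20s connect timeout per try)
# under an overall 120s cap, then run the schema migrator.
export SQL_HOST=mariadb SQL_PORT=3306
timeout 120 sh -c 'until nc -vz -w 20 "${SQL_HOST}" "${SQL_PORT}"; do echo -n "."; sleep 1; done'
/opt/app/policy/bin/db-migrator -s ALL -o upgrade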
policy-drools-pdp | + /opt/app/policy/bin/db-migrator -s ALL -o upgrade
policy-drools-pdp | -- /opt/app/policy/bin/db-migrator -s ALL -o upgrade --
policy-drools-pdp | + '[' -z -s ]
policy-drools-pdp | + shift
policy-drools-pdp | + SCHEMA=ALL
policy-drools-pdp | + shift
policy-drools-pdp | + '[' -z -o ]
policy-drools-pdp | + shift
policy-drools-pdp | + OPERATION=upgrade
policy-drools-pdp | + shift
policy-drools-pdp | + '[' -z ]
policy-drools-pdp | + '[' -z ALL ]
policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh
policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$'
policy-drools-pdp | + '[' -z /opt/app/policy ]
policy-drools-pdp | + set -a
policy-drools-pdp | + POLICY_HOME=/opt/app/policy
policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf'
policy-drools-pdp | + '[' -d /opt/app/policy/bin ]
policy-drools-pdp | + :
policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ]
policy-drools-pdp | + :
policy-drools-pdp | + '[' -d /home/policy/bin ]
policy-drools-pdp | + set +a
policy-drools-pdp | + '[' -z mariadb ]
policy-drools-pdp | + '[' -z policy_user ]
policy-drools-pdp | + '[' -z policy_user ]
policy-drools-pdp | + '[' -z 3306 ]
policy-drools-pdp | + '[' -z ]
policy-drools-pdp | + MYSQL_CMD=mysql
policy-drools-pdp | + MYSQL='mysql -upolicy_user -ppolicy_user -h mariadb -P 3306'
policy-drools-pdp | + mysql -upolicy_user -ppolicy_user -h mariadb -P 3306 --execute 'show databases;'
policy-drools-pdp | + '[' ALL '=' ALL ]
policy-drools-pdp | + SCHEMA='*'
policy-drools-pdp | + ls -d '/opt/app/policy/etc/db/migration/*/'
policy-drools-pdp | error: no databases available
policy-drools-pdp | + SCHEMA_S=
policy-drools-pdp | + '[' -z ]
policy-drools-pdp | + echo 'error: no databases available'
policy-drools-pdp | + exit 0
policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh
policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$'
policy-drools-pdp | + '[' -z /opt/app/policy ]
policy-drools-pdp | + set -a
policy-drools-pdp | + POLICY_HOME=/opt/app/policy
policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf'
policy-drools-pdp | + '[' -d /opt/app/policy/bin ]
policy-drools-pdp | + :
policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ]
policy-drools-pdp | + :
policy-drools-pdp | + '[' -d /home/policy/bin ]
policy-drools-pdp | + set +a
policy-drools-pdp | + policy exec
policy-drools-pdp | -- /opt/app/policy/bin/policy exec --
policy-drools-pdp | + BIN_SCRIPT=bin/policy-management-controller
policy-drools-pdp | + OPERATION=none
policy-drools-pdp | + '[' -z exec ]
policy-drools-pdp | + OPERATION=exec
policy-drools-pdp | + shift
policy-drools-pdp | + '[' -z ]
policy-drools-pdp | + '[' -z /opt/app/policy ]
policy-drools-pdp | + policy_exec
policy-drools-pdp | + '[' y '=' y ]
policy-drools-pdp | + echo '-- policy_exec --'
policy-drools-pdp | + set -x
policy-drools-pdp | + cd /opt/app/policy
policy-drools-pdp | + check_x_file bin/policy-management-controller
policy-drools-pdp | -- policy_exec --
policy-drools-pdp | + '[' y '=' y ]
policy-drools-pdp | + echo '-- check_x_file --'
policy-drools-pdp | -- check_x_file --
policy-drools-pdp | + set -x
policy-drools-pdp | + FILE=bin/policy-management-controller
policy-drools-pdp | + '[[' '!' -f bin/policy-management-controller '||' '!' -x bin/policy-management-controller ]]
policy-drools-pdp | + return 0
policy-drools-pdp | + bin/policy-management-controller exec
policy-drools-pdp | -- bin/policy-management-controller exec --
policy-drools-pdp | + _DIR=/opt/app/policy
policy-drools-pdp | + _LOGS=/var/log/onap/policy/pdpd
policy-drools-pdp | + '[' -z /var/log/onap/policy/pdpd ]
policy-drools-pdp | + CONTROLLER=policy-management-controller
policy-drools-pdp | + RETVAL=0
policy-drools-pdp | + _PIDFILE=/opt/app/policy/PID
policy-drools-pdp | + exec_start
policy-drools-pdp | + '[' y '=' y ]
policy-drools-pdp | + echo '-- exec_start --'
policy-drools-pdp | + set -x
policy-drools-pdp | -- exec_start --
policy-drools-pdp | + status
policy-drools-pdp | + '[' y '=' y ]
policy-drools-pdp | + echo '-- status --'
policy-drools-pdp | + set -x
policy-drools-pdp | + '[' -f /opt/app/policy/PID ]
policy-drools-pdp | -- status --
policy-drools-pdp | + '[' true ]
policy-drools-pdp | + pidof -s java
policy-drools-pdp | + _PID=
policy-drools-pdp | Policy Management (no pidfile) is NOT running
policy-drools-pdp | + _STATUS='Policy Management (no pidfile) is NOT running'
policy-drools-pdp | + _RUNNING=0
policy-drools-pdp | + '[' 0 '=' 1 ]
policy-drools-pdp | + RETVAL=1
policy-drools-pdp | + echo 'Policy Management (no pidfile) is NOT running'
policy-drools-pdp | + '[' 0 '=' 1 ]
policy-drools-pdp | + preRunning
policy-drools-pdp | + '[' y '=' y ]
policy-drools-pdp | + echo '-- preRunning --'
policy-drools-pdp | + set -x
policy-drools-pdp | + mkdir -p /var/log/onap/policy/pdpd
policy-drools-pdp | -- preRunning --
policy-drools-pdp | + xargs -I X printf ':%s' X
policy-drools-pdp | + ls /opt/app/policy/lib/accessors-smart-2.5.0.jar /opt/app/policy/lib/angus-activation-2.0.2.jar /opt/app/policy/lib/annotations-13.0.jar /opt/app/policy/lib/ant-1.10.14.jar /opt/app/policy/lib/ant-launcher-1.10.14.jar /opt/app/policy/lib/antlr-2.7.7.jar /opt/app/policy/lib/antlr-runtime-3.5.2.jar /opt/app/policy/lib/antlr4-runtime-4.10.1.jar /opt/app/policy/lib/aopalliance-1.0.jar /opt/app/policy/lib/aopalliance-repackaged-3.0.5.jar /opt/app/policy/lib/asm-9.3.jar /opt/app/policy/lib/byte-buddy-1.14.13.jar /opt/app/policy/lib/caffeine-2.9.3.jar /opt/app/policy/lib/capabilities-2.1.3.jar /opt/app/policy/lib/checker-qual-3.42.0.jar /opt/app/policy/lib/classgraph-4.8.165.jar /opt/app/policy/lib/classmate-1.5.1.jar /opt/app/policy/lib/common-parameters-2.1.3.jar /opt/app/policy/lib/commons-beanutils-1.9.4.jar /opt/app/policy/lib/commons-cli-1.5.0.jar /opt/app/policy/lib/commons-codec-1.16.0.jar /opt/app/policy/lib/commons-collections4-4.4.jar /opt/app/policy/lib/commons-configuration2-2.8.0.jar /opt/app/policy/lib/commons-io-2.13.0.jar /opt/app/policy/lib/commons-jexl3-3.2.1.jar /opt/app/policy/lib/commons-lang3-3.14.0.jar /opt/app/policy/lib/commons-logging-1.2.jar /opt/app/policy/lib/commons-net-3.9.0.jar /opt/app/policy/lib/commons-text-1.10.0.jar /opt/app/policy/lib/dom4j-2.1.3.jar /opt/app/policy/lib/drools-base-8.40.1.Final.jar /opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar /opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar /opt/app/policy/lib/drools-commands-8.40.1.Final.jar /opt/app/policy/lib/drools-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-core-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-ecj-8.40.1.Final.jar /opt/app/policy/lib/drools-engine-8.40.1.Final.jar
/opt/app/policy/lib/drools-io-8.40.1.Final.jar /opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar /opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar /opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar /opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar /opt/app/policy/lib/drools-tms-8.40.1.Final.jar /opt/app/policy/lib/drools-util-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar /opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar /opt/app/policy/lib/ecj-3.33.0.jar /opt/app/policy/lib/error_prone_annotations-2.23.0.jar /opt/app/policy/lib/failureaccess-1.0.2.jar /opt/app/policy/lib/feature-lifecycle-2.1.3.jar /opt/app/policy/lib/gson-2.1.3.jar /opt/app/policy/lib/gson-2.10.1.jar /opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar /opt/app/policy/lib/guava-33.0.0-jre.jar /opt/app/policy/lib/guice-4.2.2-no_aop.jar /opt/app/policy/lib/hibernate-commons-annotations-6.0.6.Final.jar /opt/app/policy/lib/hibernate-core-6.3.2.Final.jar /opt/app/policy/lib/hibernate-core-jakarta-5.6.15.Final.jar /opt/app/policy/lib/hibernate-validator-8.0.1.Final.jar /opt/app/policy/lib/hk2-api-3.0.5.jar /opt/app/policy/lib/hk2-locator-3.0.5.jar /opt/app/policy/lib/hk2-utils-3.0.5.jar /opt/app/policy/lib/httpclient-4.5.14.jar /opt/app/policy/lib/httpcore-4.4.16.jar /opt/app/policy/lib/istack-commons-runtime-4.1.2.jar /opt/app/policy/lib/j2objc-annotations-2.8.jar /opt/app/policy/lib/jackson-annotations-2.16.1.jar /opt/app/policy/lib/jackson-core-2.16.1.jar /opt/app/policy/lib/jackson-databind-2.16.1.jar /opt/app/policy/lib/jackson-dataformat-yaml-2.16.1.jar /opt/app/policy/lib/jackson-datatype-jsr310-2.16.1.jar /opt/app/policy/lib/jackson-jakarta-rs-base-2.16.1.jar /opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.16.1.jar /opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.16.1.jar /opt/app/policy/lib/jakarta.activation-api-2.1.2.jar /opt/app/policy/lib/jakarta.annotation-api-2.1.1.jar /opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar /opt/app/policy/lib/jakarta.el-api-3.0.3.jar /opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar /opt/app/policy/lib/jakarta.inject-api-2.0.1.jar /opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar /opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar /opt/app/policy/lib/jakarta.servlet-api-6.0.0.jar /opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar /opt/app/policy/lib/jakarta.validation-api-3.0.2.jar /opt/app/policy/lib/jakarta.ws.rs-api-3.1.0.jar /opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar /opt/app/policy/lib/jandex-2.4.2.Final.jar /opt/app/policy/lib/jandex-3.1.2.jar /opt/app/policy/lib/javaparser-core-3.24.2.jar /opt/app/policy/lib/javassist-3.29.2-GA.jar /opt/app/policy/lib/javax.inject-1.jar /opt/app/policy/lib/javax.inject-2.5.0-b62.jar /opt/app/policy/lib/jaxb-core-4.0.5.jar /opt/app/policy/lib/jaxb-impl-4.0.5.jar /opt/app/policy/lib/jaxb-runtime-4.0.5.jar /opt/app/policy/lib/jaxb-xjc-4.0.5.jar /opt/app/policy/lib/jboss-logging-3.5.3.Final.jar /opt/app/policy/lib/jcl-over-slf4j-2.0.12.jar /opt/app/policy/lib/jersey-client-3.1.5.jar /opt/app/policy/lib/jersey-common-3.1.5.jar 
/opt/app/policy/lib/jersey-container-servlet-core-3.1.5.jar /opt/app/policy/lib/jersey-hk2-3.1.5.jar /opt/app/policy/lib/jersey-server-3.1.5.jar /opt/app/policy/lib/jetty-http-11.0.20.jar /opt/app/policy/lib/jetty-io-11.0.20.jar /opt/app/policy/lib/jetty-jakarta-servlet-api-5.0.2.jar /opt/app/policy/lib/jetty-security-11.0.20.jar /opt/app/policy/lib/jetty-server-11.0.20.jar /opt/app/policy/lib/jetty-servlet-11.0.20.jar /opt/app/policy/lib/jetty-util-11.0.20.jar /opt/app/policy/lib/jna-5.13.0.jar /opt/app/policy/lib/jna-platform-5.13.0.jar /opt/app/policy/lib/json-path-2.9.0.jar /opt/app/policy/lib/json-smart-2.5.0.jar /opt/app/policy/lib/jsr305-3.0.2.jar /opt/app/policy/lib/kafka-clients-3.6.1.jar /opt/app/policy/lib/kie-api-8.40.1.Final.jar /opt/app/policy/lib/kie-ci-8.40.1.Final.jar /opt/app/policy/lib/kie-internal-8.40.1.Final.jar /opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar /opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar /opt/app/policy/lib/kotlin-reflect-1.9.23.jar /opt/app/policy/lib/kotlin-stdlib-1.9.23.jar /opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar /opt/app/policy/lib/logback-classic-1.4.14.jar /opt/app/policy/lib/logback-core-1.4.14.jar /opt/app/policy/lib/lombok-1.18.30.jar /opt/app/policy/lib/lz4-java-1.8.0.jar /opt/app/policy/lib/mariadb-java-client-3.3.3.jar /opt/app/policy/lib/maven-artifact-3.8.6.jar /opt/app/policy/lib/maven-builder-support-3.8.6.jar /opt/app/policy/lib/maven-compat-3.8.6.jar /opt/app/policy/lib/maven-core-3.8.6.jar /opt/app/policy/lib/maven-model-3.8.6.jar /opt/app/policy/lib/maven-model-builder-3.8.6.jar /opt/app/policy/lib/maven-plugin-api-3.8.6.jar /opt/app/policy/lib/maven-repository-metadata-3.8.6.jar /opt/app/policy/lib/maven-resolver-api-1.6.3.jar /opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar /opt/app/policy/lib/maven-resolver-impl-1.6.3.jar /opt/app/policy/lib/maven-resolver-provider-3.8.6.jar /opt/app/policy/lib/maven-resolver-spi-1.6.3.jar /opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar /opt/app/policy/lib/maven-resolver-util-1.6.3.jar /opt/app/policy/lib/maven-settings-3.8.6.jar /opt/app/policy/lib/maven-settings-builder-3.8.6.jar /opt/app/policy/lib/maven-shared-utils-3.3.4.jar /opt/app/policy/lib/medeia-validator-core-1.1.1.jar /opt/app/policy/lib/medeia-validator-gson-1.1.1.jar /opt/app/policy/lib/mvel2-2.5.2.Final.jar /opt/app/policy/lib/mxparser-1.2.2.jar /opt/app/policy/lib/opentelemetry-api-1.25.0.jar /opt/app/policy/lib/opentelemetry-context-1.25.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-1.25.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-semconv-1.25.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-2.6-1.25.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-common-1.25.0-alpha.jar /opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar /opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar /opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar /opt/app/policy/lib/osgi-resource-locator-1.0.3.jar /opt/app/policy/lib/plexus-cipher-2.0.jar /opt/app/policy/lib/plexus-classworlds-2.6.0.jar /opt/app/policy/lib/plexus-component-annotations-2.1.0.jar /opt/app/policy/lib/plexus-interpolation-1.26.jar /opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar 
/opt/app/policy/lib/plexus-utils-3.5.0.jar /opt/app/policy/lib/policy-core-2.1.3.jar /opt/app/policy/lib/policy-domains-2.1.3.jar /opt/app/policy/lib/policy-endpoints-2.1.3.jar /opt/app/policy/lib/policy-management-2.1.3.jar /opt/app/policy/lib/policy-models-base-3.1.3.jar /opt/app/policy/lib/policy-models-dao-3.1.3.jar /opt/app/policy/lib/policy-models-errors-3.1.3.jar /opt/app/policy/lib/policy-models-examples-3.1.3.jar /opt/app/policy/lib/policy-models-pdp-3.1.3.jar /opt/app/policy/lib/policy-models-tosca-3.1.3.jar /opt/app/policy/lib/policy-utils-2.1.3.jar /opt/app/policy/lib/postgresql-42.7.2.jar /opt/app/policy/lib/protobuf-java-3.22.0.jar /opt/app/policy/lib/re2j-1.7.jar /opt/app/policy/lib/simpleclient-0.16.0.jar /opt/app/policy/lib/simpleclient_common-0.16.0.jar /opt/app/policy/lib/simpleclient_hotspot-0.16.0.jar /opt/app/policy/lib/simpleclient_logback-0.16.0.jar /opt/app/policy/lib/simpleclient_servlet_common-0.16.0.jar /opt/app/policy/lib/simpleclient_servlet_jakarta-0.16.0.jar /opt/app/policy/lib/simpleclient_tracer_common-0.16.0.jar /opt/app/policy/lib/simpleclient_tracer_otel-0.16.0.jar /opt/app/policy/lib/simpleclient_tracer_otel_agent-0.16.0.jar /opt/app/policy/lib/slf4j-api-2.0.12.jar /opt/app/policy/lib/snakeyaml-2.2.jar /opt/app/policy/lib/snappy-java-1.1.10.5.jar /opt/app/policy/lib/swagger-annotations-2.2.20.jar /opt/app/policy/lib/swagger-annotations-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-core-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-integration-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.20.jar /opt/app/policy/lib/swagger-models-jakarta-2.2.20.jar /opt/app/policy/lib/txw2-4.0.5.jar /opt/app/policy/lib/utils-2.1.3.jar /opt/app/policy/lib/waffle-jna-3.3.0.jar /opt/app/policy/lib/wagon-http-3.5.1.jar /opt/app/policy/lib/wagon-http-shared-3.5.1.jar /opt/app/policy/lib/wagon-provider-api-3.5.1.jar /opt/app/policy/lib/xmlpull-1.1.3.1.jar /opt/app/policy/lib/xstream-1.4.20.jar /opt/app/policy/lib/zstd-jni-1.5.5-1.jar policy-drools-pdp | + 
CP=:/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/annotations-13.0.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-2.7.7.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.10.1.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.5.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.14.13.jar:/opt/app/policy/lib/caffeine-2.9.3.jar:/opt/app/policy/lib/capabilities-2.1.3.jar:/opt/app/policy/lib/checker-qual-3.42.0.jar:/opt/app/policy/lib/classgraph-4.8.165.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-2.1.3.jar:/opt/app/policy/lib/commons-beanutils-1.9.4.jar:/opt/app/policy/lib/commons-cli-1.5.0.jar:/opt/app/policy/lib/commons-codec-1.16.0.jar:/opt/app/policy/lib/commons-collections4-4.4.jar:/opt/app/policy/lib/commons-configuration2-2.8.0.jar:/opt/app/policy/lib/commons-io-2.13.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.14.0.jar:/opt/app/policy/lib/commons-logging-1.2.jar:/opt/app/policy/lib/commons-net-3.9.0.jar:/opt/app/policy/lib/commons-text-1.10.0.jar:/opt/app/policy/lib/dom4j-2.1.3.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.23.0.jar:/opt/app/policy/lib/failureaccess-1.0.2.jar:/opt/app/policy/lib/feature-lifecycle-2.1.3.jar:/opt/app/policy/lib/gson-2.1.3.jar:/opt/app/policy/lib/gson-2.10.1.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.0.0-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/hibernate-commons-annotations-6.0.6.Final.jar:/opt/app/policy/lib/hibernate-core-6.3.2.Final.jar:/opt/app/policy/lib/hibernate-core-jakarta-5.6.15.Final.jar:/opt/app/policy/lib/hibernate-validator-8.0.1.Final.jar:/opt/app/policy/lib/hk2-api-3.0.5.jar:/opt/app/policy/lib/hk2-locator-3.0.5.jar:/opt/app/policy/lib/hk2-utils-3.0.5.jar:/opt/app/policy/lib/htt
pclient-4.5.14.jar:/opt/app/policy/lib/httpcore-4.4.16.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-2.8.jar:/opt/app/policy/lib/jackson-annotations-2.16.1.jar:/opt/app/policy/lib/jackson-core-2.16.1.jar:/opt/app/policy/lib/jackson-databind-2.16.1.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.16.1.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.16.1.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.16.1.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.2.jar:/opt/app/policy/lib/jakarta.annotation-api-2.1.1.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.0.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.0.2.jar:/opt/app/policy/lib/jakarta.ws.rs-api-3.1.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-2.4.2.Final.jar:/opt/app/policy/lib/jandex-3.1.2.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.29.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/javax.inject-2.5.0-b62.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.3.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.12.jar:/opt/app/policy/lib/jersey-client-3.1.5.jar:/opt/app/policy/lib/jersey-common-3.1.5.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.5.jar:/opt/app/policy/lib/jersey-hk2-3.1.5.jar:/opt/app/policy/lib/jersey-server-3.1.5.jar:/opt/app/policy/lib/jetty-http-11.0.20.jar:/opt/app/policy/lib/jetty-io-11.0.20.jar:/opt/app/policy/lib/jetty-jakarta-servlet-api-5.0.2.jar:/opt/app/policy/lib/jetty-security-11.0.20.jar:/opt/app/policy/lib/jetty-server-11.0.20.jar:/opt/app/policy/lib/jetty-servlet-11.0.20.jar:/opt/app/policy/lib/jetty-util-11.0.20.jar:/opt/app/policy/lib/jna-5.13.0.jar:/opt/app/policy/lib/jna-platform-5.13.0.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsr305-3.0.2.jar:/opt/app/policy/lib/kafka-clients-3.6.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/kotlin-reflect-1.9.23.jar:/opt/app/policy/lib/kotlin-stdlib-1.9.23.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.4.14.jar:/opt/app/policy/lib/logback-core-1.4.14.jar:/opt/app/policy/lib/lombok-1.18.30.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/mariadb-java-client-3.3.3.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-mod
el-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/medeia-validator-core-1.1.1.jar:/opt/app/policy/lib/medeia-validator-gson-1.1.1.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-context-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.5.0.jar:/opt/app/policy/lib/policy-core-2.1.3.jar:/opt/app/policy/lib/policy-domains-2.1.3.jar:/opt/app/policy/lib/policy-endpoints-2.1.3.jar:/opt/app/policy/lib/policy-management-2.1.3.jar:/opt/app/policy/lib/policy-models-base-3.1.3.jar:/opt/app/policy/lib/policy-models-dao-3.1.3.jar:/opt/app/policy/lib/policy-models-errors-3.1.3.jar:/opt/app/policy/lib/policy-models-examples-3.1.3.jar:/opt/app/policy/lib/policy-models-pdp-3.1.3.jar:/opt/app/policy/lib/policy-models-tosca-3.1.3.jar:/opt/app/policy/lib/policy-utils-2.1.3.jar:/opt/app/policy/lib/postgresql-42.7.2.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.7.jar:/opt/app/policy/lib/simpleclient-0.16.0.jar:/opt/app/policy/lib/simpleclient_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_hotspot-0.16.0.jar:/opt/app/policy/lib/simpleclient_logback-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_jakarta-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel_agent-0.16.0.jar:/opt/app/policy/lib/slf4j-api-2.0.12.jar:/opt/app/policy/lib/snakeyaml-2.2.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.20.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.20.
jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-2.1.3.jar:/opt/app/policy/lib/waffle-jna-3.3.0.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.5-1.jar policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + /opt/app/policy/bin/configure-maven policy-drools-pdp | + HOME_M2=/home/policy/.m2 policy-drools-pdp | + mkdir -p /home/policy/.m2 policy-drools-pdp | + '[' -z http://nexus:8081/nexus/content/repositories/snapshots/ ] policy-drools-pdp | + ln -s -f /opt/app/policy/etc/m2/settings.xml /home/policy/.m2/settings.xml policy-drools-pdp | + '[' -f /opt/app/policy/config/system.properties ] policy-drools-pdp | + sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /opt/app/policy/config/system.properties policy-drools-pdp | + systemProperties='-Dlogback.configurationFile=config/logback.xml' policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + exec /usr/lib/jvm/java-17-openjdk/bin/java -server -Xms512m -Xmx512m -cp /opt/app/policy/config:/opt/app/policy/lib::/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/annotations-13.0.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-2.7.7.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.10.1.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.5.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.14.13.jar:/opt/app/policy/lib/caffeine-2.9.3.jar:/opt/app/policy/lib/capabilities-2.1.3.jar:/opt/app/policy/lib/checker-qual-3.42.0.jar:/opt/app/policy/lib/classgraph-4.8.165.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-2.1.3.jar:/opt/app/policy/lib/commons-beanutils-1.9.4.jar:/opt/app/policy/lib/commons-cli-1.5.0.jar:/opt/app/policy/lib/commons-codec-1.16.0.jar:/opt/app/policy/lib/commons-collections4-4.4.jar:/opt/app/policy/lib/commons-configuration2-2.8.0.jar:/opt/app/policy/lib/commons-io-2.13.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.14.0.jar:/opt/app/policy/lib/commons-logging-1.2.jar:/opt/app/policy/lib/commons-net-3.9.0.jar:/opt/app/policy/lib/commons-text-1.10.0.jar:/opt/app/policy/lib/dom4j-2.1.3.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8
.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.23.0.jar:/opt/app/policy/lib/failureaccess-1.0.2.jar:/opt/app/policy/lib/feature-lifecycle-2.1.3.jar:/opt/app/policy/lib/gson-2.1.3.jar:/opt/app/policy/lib/gson-2.10.1.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.0.0-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/hibernate-commons-annotations-6.0.6.Final.jar:/opt/app/policy/lib/hibernate-core-6.3.2.Final.jar:/opt/app/policy/lib/hibernate-core-jakarta-5.6.15.Final.jar:/opt/app/policy/lib/hibernate-validator-8.0.1.Final.jar:/opt/app/policy/lib/hk2-api-3.0.5.jar:/opt/app/policy/lib/hk2-locator-3.0.5.jar:/opt/app/policy/lib/hk2-utils-3.0.5.jar:/opt/app/policy/lib/httpclient-4.5.14.jar:/opt/app/policy/lib/httpcore-4.4.16.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-2.8.jar:/opt/app/policy/lib/jackson-annotations-2.16.1.jar:/opt/app/policy/lib/jackson-core-2.16.1.jar:/opt/app/policy/lib/jackson-databind-2.16.1.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.16.1.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.16.1.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.16.1.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.16.1.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.2.jar:/opt/app/policy/lib/jakarta.annotation-api-2.1.1.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.0.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.0.2.jar:/opt/app/policy/lib/jakarta.ws.rs-api-3.1.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-2.4.2.Final.jar:/opt/app/policy/lib/jandex-3.1.2.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.29.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/javax.inject-2.5.0-b62.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.3.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.12.jar:/opt/app/policy/lib/jersey-client-3.1.
5.jar:/opt/app/policy/lib/jersey-common-3.1.5.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.5.jar:/opt/app/policy/lib/jersey-hk2-3.1.5.jar:/opt/app/policy/lib/jersey-server-3.1.5.jar:/opt/app/policy/lib/jetty-http-11.0.20.jar:/opt/app/policy/lib/jetty-io-11.0.20.jar:/opt/app/policy/lib/jetty-jakarta-servlet-api-5.0.2.jar:/opt/app/policy/lib/jetty-security-11.0.20.jar:/opt/app/policy/lib/jetty-server-11.0.20.jar:/opt/app/policy/lib/jetty-servlet-11.0.20.jar:/opt/app/policy/lib/jetty-util-11.0.20.jar:/opt/app/policy/lib/jna-5.13.0.jar:/opt/app/policy/lib/jna-platform-5.13.0.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsr305-3.0.2.jar:/opt/app/policy/lib/kafka-clients-3.6.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/kotlin-reflect-1.9.23.jar:/opt/app/policy/lib/kotlin-stdlib-1.9.23.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.4.14.jar:/opt/app/policy/lib/logback-core-1.4.14.jar:/opt/app/policy/lib/lombok-1.18.30.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/mariadb-java-client-3.3.3.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/medeia-validator-core-1.1.1.jar:/opt/app/policy/lib/medeia-validator-gson-1.1.1.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-context-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-1.25.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-1.25.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-
sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.5.0.jar:/opt/app/policy/lib/policy-core-2.1.3.jar:/opt/app/policy/lib/policy-domains-2.1.3.jar:/opt/app/policy/lib/policy-endpoints-2.1.3.jar:/opt/app/policy/lib/policy-management-2.1.3.jar:/opt/app/policy/lib/policy-models-base-3.1.3.jar:/opt/app/policy/lib/policy-models-dao-3.1.3.jar:/opt/app/policy/lib/policy-models-errors-3.1.3.jar:/opt/app/policy/lib/policy-models-examples-3.1.3.jar:/opt/app/policy/lib/policy-models-pdp-3.1.3.jar:/opt/app/policy/lib/policy-models-tosca-3.1.3.jar:/opt/app/policy/lib/policy-utils-2.1.3.jar:/opt/app/policy/lib/postgresql-42.7.2.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.7.jar:/opt/app/policy/lib/simpleclient-0.16.0.jar:/opt/app/policy/lib/simpleclient_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_hotspot-0.16.0.jar:/opt/app/policy/lib/simpleclient_logback-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_servlet_jakarta-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_common-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel-0.16.0.jar:/opt/app/policy/lib/simpleclient_tracer_otel_agent-0.16.0.jar:/opt/app/policy/lib/slf4j-api-2.0.12.jar:/opt/app/policy/lib/snakeyaml-2.2.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.20.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.20.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.20.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-2.1.3.jar:/opt/app/policy/lib/waffle-jna-3.3.0.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.5-1.jar '-Dlogback.configurationFile=config/logback.xml' org.onap.policy.drools.system.Main policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwitchesApi cannot be instantiated and will be ignored. policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwaggerApi cannot be instantiated and will be ignored. policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.DefaultApi cannot be instantiated and will be ignored. policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.EnvironmentApi cannot be instantiated and will be ignored. 
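Condensed, the launch sequence that produced the long exec line above is three small steps, each visible verbatim in the trace: join every jar under lib/ into colon-separated classpath entries, turn config/system.properties into -D flags with sed, then exec the JVM. A sketch reassembled from those trace fragments (only the comments and the bare `java` name are mine; the trace uses the full JDK path):

    # 1. Join every jar under lib/ into ':'-separated classpath entries.
    CP=$(ls "${POLICY_HOME}"/lib/*.jar | xargs -I X printf ':%s' X)

    # 2. Turn each non-comment "key = value" line into -Dkey=value; in this run
    #    that yields just -Dlogback.configurationFile=config/logback.xml.
    systemProperties=$(sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' \
        "${POLICY_HOME}"/config/system.properties)

    # 3. Replace the shell with the engine JVM, config/ and lib/ first on the classpath.
    cd "${POLICY_HOME}"
    exec java -server -Xms512m -Xmx512m \
        -cp "${POLICY_HOME}/config:${POLICY_HOME}/lib${CP}" \
        ${systemProperties} org.onap.policy.drools.system.Main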
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LifecycleApi cannot be instantiated and will be ignored.
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.FeaturesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.InputsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.PropertiesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LegacyApi cannot be instantiated and will be ignored.
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.TopicsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ControllersApi cannot be instantiated and will be ignored.
policy-drools-pdp | Oct 05, 2024 10:45:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ToolsApi cannot be instantiated and will be ignored.
===================================
======== Logs from pap ========
policy-pap | Waiting for mariadb port 3306...
policy-pap | mariadb (172.17.0.3:3306) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.4:9092) open
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.6:6969) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |  :: Spring Boot ::                (v3.1.10)
policy-pap |
policy-pap | [2024-10-05T10:45:40.035+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-pap | [2024-10-05T10:45:40.098+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2024-10-05T10:45:40.099+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-10-05T10:45:42.022+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-10-05T10:45:42.117+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 87 ms. Found 7 JPA repository interfaces.
policy-pap | [2024-10-05T10:45:42.538+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-10-05T10:45:42.538+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-10-05T10:45:43.166+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-pap | [2024-10-05T10:45:43.175+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-10-05T10:45:43.176+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2024-10-05T10:45:43.177+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
policy-pap | [2024-10-05T10:45:43.262+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2024-10-05T10:45:43.262+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3097 ms
policy-pap | [2024-10-05T10:45:43.652+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2024-10-05T10:45:43.705+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
policy-pap | [2024-10-05T10:45:44.057+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2024-10-05T10:45:44.164+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@fd9ebde
policy-pap | [2024-10-05T10:45:44.166+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
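The ConsumerConfig dumps that follow show all of PAP's consumers running PLAINTEXT against kafka:9092 and subscribed to policy-pdp-pap. When debugging a run like this it can help to watch the same topic from outside the JVM; a sketch using the stock Kafka console consumer (the container name `kafka` comes from the log, but the assumption that the CLI is on PATH inside that container is mine):

    # Tail the PAP<->PDP topic with the standard Kafka CLI from the host.
    docker exec -it kafka kafka-console-consumer.sh \
        --bootstrap-server kafka:9092 \
        --topic policy-pdp-pap \
        --from-beginning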
policy-pap | [2024-10-05T10:45:44.199+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-10-05T10:45:45.651+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-10-05T10:45:45.661+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-10-05T10:45:46.117+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-10-05T10:45:46.508+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-10-05T10:45:46.631+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-10-05T10:45:46.924+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 1bc68f55-3c38-4fc5-889a-f7dc8bd995b5 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name 
= null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-10-05T10:45:47.095+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-10-05T10:45:47.096+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-10-05T10:45:47.096+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1728125147094 policy-pap | [2024-10-05T10:45:47.098+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-1, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-10-05T10:45:47.098+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | 
enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = 
JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-10-05T10:45:47.106+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-10-05T10:45:47.106+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-10-05T10:45:47.106+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1728125147106 policy-pap | [2024-10-05T10:45:47.106+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-10-05T10:45:47.410+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-10-05T10:45:47.562+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-10-05T10:45:47.803+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2ed84be9, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@400e741, org.springframework.security.web.context.SecurityContextHolderFilter@1fd35a92, org.springframework.security.web.header.HeaderWriterFilter@cea67b1, org.springframework.security.web.authentication.logout.LogoutFilter@30ed2a26, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@438c0aaf, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5ced0537, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6b630d4b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3be369fc, org.springframework.security.web.access.ExceptionTranslationFilter@afb7b03, org.springframework.security.web.access.intercept.AuthorizationFilter@468f2a6f] policy-pap | [2024-10-05T10:45:48.677+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-10-05T10:45:48.784+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-10-05T10:45:48.807+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-10-05T10:45:48.827+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-10-05T10:45:48.827+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-10-05T10:45:48.828+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | 
[2024-10-05T10:45:48.828+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-10-05T10:45:48.828+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2024-10-05T10:45:48.829+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-10-05T10:45:48.829+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-10-05T10:45:48.830+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1cc81ea1 policy-pap | [2024-10-05T10:45:48.842+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-10-05T10:45:48.842+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 1bc68f55-3c38-4fc5-889a-f7dc8bd995b5 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-10-05T10:45:48.848+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-10-05T10:45:48.848+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-10-05T10:45:48.848+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1728125148848 policy-pap | [2024-10-05T10:45:48.848+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Subscribed to topic(s): policy-pdp-pap policy-pap | 
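The ConsumerConfig dump above is a stock Kafka client configuration: only bootstrap.servers, group.id, client.id, and the String deserializers differ from the defaults. A minimal standalone sketch of an equivalent consumer, built only from values visible in the log (the class name is illustrative, and this is not PAP's actual wiring, which goes through SingleThreadedKafkaTopicSource):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");            // from the log
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");       // from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));                  // topic from the log
            // fetchTimeout=15000 in the log corresponds to a 15 s poll timeout
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> rec : records) {
                System.out.printf("offset=%d value=%s%n", rec.offset(), rec.value());
            }
        }
    }
}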
[2024-10-05T10:45:48.850+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-10-05T10:45:48.850+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c5ba0ce5-abdc-4d1d-a4eb-efbe0da28be6, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@72ce8a9b policy-pap | [2024-10-05T10:45:48.850+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c5ba0ce5-abdc-4d1d-a4eb-efbe0da28be6, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-10-05T10:45:48.850+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap 
| sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-10-05T10:45:48.854+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-10-05T10:45:48.854+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-10-05T10:45:48.854+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1728125148854 policy-pap | [2024-10-05T10:45:48.854+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-10-05T10:45:48.855+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-10-05T10:45:48.856+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c5ba0ce5-abdc-4d1d-a4eb-efbe0da28be6, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, 
uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-10-05T10:45:48.856+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-10-05T10:45:48.856+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a6d0325d-fcef-43ee-b82e-66f5232a9915, alive=false, publisher=null]]: starting policy-pap | [2024-10-05T10:45:48.870+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | 
sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-10-05T10:45:48.885+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
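The "Instantiated an idempotent producer" line follows directly from the ProducerConfig dump above: enable.idempotence = true combined with acks = -1 (i.e. acks=all) and retries = 2147483647 means the broker deduplicates retried batches, so retries cannot write duplicates. A minimal sketch of a producer with the same effective settings; the topic and payload are assumptions for illustration, since the log does not show what producer-1 publishes first:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // from the log
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");        // triggers the "idempotent producer" line
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);        // logged as retries = 2147483647
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic and value are illustrative only
            producer.send(new ProducerRecord<>("policy-pdp-pap", "example-payload"));
            producer.flush();
        }
    }
}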
policy-pap | [2024-10-05T10:45:48.902+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-10-05T10:45:48.902+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-10-05T10:45:48.902+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1728125148901 policy-pap | [2024-10-05T10:45:48.902+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a6d0325d-fcef-43ee-b82e-66f5232a9915, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-10-05T10:45:48.902+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=34d6bfdc-55c6-41ff-841a-2d99dea3cb36, alive=false, publisher=null]]: starting policy-pap | [2024-10-05T10:45:48.903+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-10-05T10:45:48.905+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. policy-pap | [2024-10-05T10:45:48.909+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-10-05T10:45:48.909+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-10-05T10:45:48.909+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1728125148909 policy-pap | [2024-10-05T10:45:48.909+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=34d6bfdc-55c6-41ff-841a-2d99dea3cb36, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-10-05T10:45:48.909+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-10-05T10:45:48.909+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2024-10-05T10:45:48.910+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-10-05T10:45:48.911+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-10-05T10:45:48.913+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2024-10-05T10:45:48.913+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-10-05T10:45:48.913+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2024-10-05T10:45:48.927+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2024-10-05T10:45:48.927+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2024-10-05T10:45:48.928+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2024-10-05T10:45:48.929+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-10-05T10:45:48.932+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.6 seconds 
(process running for 10.21) policy-pap | [2024-10-05T10:45:49.368+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: E85j95xVQXmDZtJdQjKymw policy-pap | [2024-10-05T10:45:49.369+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 2 with epoch 0 policy-pap | [2024-10-05T10:45:49.370+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: E85j95xVQXmDZtJdQjKymw policy-pap | [2024-10-05T10:45:49.371+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2024-10-05T10:45:49.372+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: E85j95xVQXmDZtJdQjKymw policy-pap | [2024-10-05T10:45:49.372+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Cluster ID: E85j95xVQXmDZtJdQjKymw policy-pap | [2024-10-05T10:45:49.374+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-10-05T10:45:49.374+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-10-05T10:45:49.385+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] (Re-)joining group policy-pap | [2024-10-05T10:45:49.388+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-10-05T10:45:49.401+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Request joining group due to: need to re-join with the given member-id: consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3-25b49f04-f485-4a5c-b427-86ae650a8f73 policy-pap | [2024-10-05T10:45:49.401+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-10-05T10:45:49.401+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] (Re-)joining group policy-pap | [2024-10-05T10:45:49.401+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-18cf91c0-701a-489f-b802-8500a9b0800f policy-pap | [2024-10-05T10:45:49.402+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-10-05T10:45:49.402+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-10-05T10:45:52.405+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3-25b49f04-f485-4a5c-b427-86ae650a8f73', protocol='range'} policy-pap | [2024-10-05T10:45:52.405+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-18cf91c0-701a-489f-b802-8500a9b0800f', protocol='range'} policy-pap | [2024-10-05T10:45:52.416+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Finished assignment for group at generation 1: {consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3-25b49f04-f485-4a5c-b427-86ae650a8f73=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-10-05T10:45:52.416+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-18cf91c0-701a-489f-b802-8500a9b0800f=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-10-05T10:45:52.424+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3-25b49f04-f485-4a5c-b427-86ae650a8f73', protocol='range'} policy-pap | [2024-10-05T10:45:52.424+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-18cf91c0-701a-489f-b802-8500a9b0800f', protocol='range'} policy-pap | [2024-10-05T10:45:52.424+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | 
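The exchange above is the normal first-join handshake for a new consumer group: the broker answers the initial JoinGroup with MemberIdRequiredException, hands back a member id, the client re-joins, and at generation 1 the range assignor gives it policy-pdp-pap-0. Application code can observe the same "Notifying assignor about the new Assignment" step through a ConsumerRebalanceListener; a minimal sketch, assuming a consumer configured as in the earlier dump:

import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceListenerSketch {
    // 'consumer' is assumed to be built with the ConsumerConfig values logged above
    static void subscribeWithListener(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                System.out.println("Revoked: " + partitions);   // empty on the first join
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Fires after "Successfully synced group"; here: [policy-pdp-pap-0]
                System.out.println("Assigned: " + partitions);
            }
        });
    }
}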
[2024-10-05T10:45:52.424+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-10-05T10:45:52.427+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-10-05T10:45:52.427+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-10-05T10:45:52.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-10-05T10:45:52.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-10-05T10:45:52.445+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-1bc68f55-3c38-4fc5-889a-f7dc8bd995b5-3, groupId=1bc68f55-3c38-4fc5-889a-f7dc8bd995b5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-10-05T10:45:52.445+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. =================================== ======== Logs from zookeeper ======== zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2024-10-05 10:45:16,352] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,355] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,355] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,355] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,355] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,356] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-10-05 10:45:16,356] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-10-05 10:45:16,356] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-10-05 10:45:16,356] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2024-10-05 10:45:16,357] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2024-10-05 10:45:16,357] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,358] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,358] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,358] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,358] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-10-05 10:45:16,358] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2024-10-05 10:45:16,371] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@75c072cb (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2024-10-05 10:45:16,376] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-10-05 10:45:16,376] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-10-05 10:45:16,380] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-10-05 10:45:16,388] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,388] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,388] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,389] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,389] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,389] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,389] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,389] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,389] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,389] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:java.version=17.0.12 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/connect-json-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2
.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/trogdor-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/u
sr/bin/../share/java/kafka/kafka-metadata-7.7.1-ccs.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,390] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,391] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,391] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,391] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,391] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,391] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,391] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,391] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2024-10-05 10:45:16,392] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,392] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,394] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-10-05 10:45:16,394] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-10-05 10:45:16,395] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-10-05 10:45:16,395] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-10-05 10:45:16,395] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-10-05 10:45:16,395] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-10-05 10:45:16,395] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-10-05 10:45:16,395] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-10-05 10:45:16,397] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,399] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,399] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-10-05 10:45:16,399] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-10-05 10:45:16,399] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,421] INFO Logging initialized @421ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2024-10-05 10:45:16,468] WARN o.e.j.s.ServletContextHandler@f5958c9{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-10-05 10:45:16,468] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-10-05 10:45:16,482] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 17.0.12+7-LTS (org.eclipse.jetty.server.Server) zookeeper | [2024-10-05 10:45:16,501] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2024-10-05 10:45:16,501] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2024-10-05 10:45:16,502] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2024-10-05 10:45:16,524] WARN ServletContext@o.e.j.s.ServletContextHandler@f5958c9{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2024-10-05 10:45:16,538] INFO Started o.e.j.s.ServletContextHandler@f5958c9{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-10-05 10:45:16,553] INFO Started ServerConnector@436813f3{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2024-10-05 10:45:16,553] INFO Started @557ms (org.eclipse.jetty.server.Server) zookeeper | [2024-10-05 10:45:16,554] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2024-10-05 10:45:16,560] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-10-05 10:45:16,560] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-10-05 10:45:16,561] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-10-05 10:45:16,562] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-10-05 10:45:16,572] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-10-05 10:45:16,572] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-10-05 10:45:16,572] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-10-05 10:45:16,572] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-10-05 10:45:16,580] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2024-10-05 10:45:16,581] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-10-05 10:45:16,584] INFO Snapshot loaded in 12 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-10-05 10:45:16,585] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-10-05 10:45:16,585] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-10-05 10:45:16,595] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2024-10-05 10:45:16,595] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2024-10-05 10:45:16,609] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2024-10-05 10:45:16,609] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2024-10-05 10:45:17,664] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) =================================== Tearing down containers... 
Container policy-drools-pdp Stopping
Container policy-csit Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container policy-drools-pdp Stopped
Container policy-drools-pdp Removing
Container policy-drools-pdp Removed
Container policy-pap Stopping
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container policy-api Stopping
Container kafka Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container mariadb Stopping
Container mariadb Stopped
Container mariadb Removing
Container mariadb Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2162 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins15949056363089688217.sh
---> sysstat.sh
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins7326720140283454945.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp ']'
+ mkdir -p /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/archives/
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins13542729162489213341.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-y7p4 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-y7p4/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins8341292658260387492.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp@tmp/config6526668231618165954tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins5589404200837125110.sh
---> create-netrc.sh
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins18110931946419711959.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-y7p4 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-y7p4/bin to PATH
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins9824334604048943221.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins15648990874372941792.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-y7p4 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-y7p4/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-drools-pdp-newdelhi-project-csit-drools-pdp] $ /bin/bash -l /tmp/jenkins1054705896533467047.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-drools-pdp-newdelhi-project-csit-drools-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-y7p4 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-y7p4/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-drools-pdp-newdelhi-project-csit-drools-pdp/139
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
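The post-build steps above repeatedly print "Reuse venv:/tmp/venv-y7p4 from file:/tmp/.os_lf_venv": one Python venv is created once and then reused by every later step via a marker file. A minimal sketch of that reuse pattern, assuming only the marker-file convention shown in the log (the function name and everything else here are illustrative, not the real lf-activate-venv implementation):

    # Illustrative sketch of the venv-reuse pattern seen in lf-activate-venv().
    # Only the marker file path /tmp/.os_lf_venv is taken from the log.
    lf_activate_venv_sketch() {
      local marker=/tmp/.os_lf_venv
      local venv_dir
      if [ -f "${marker}" ]; then
        venv_dir=$(cat "${marker}")            # Reuse the recorded venv
      else
        venv_dir=$(mktemp -d /tmp/venv-XXXX)   # Create a fresh venv once
        python3 -m venv "${venv_dir}"
        echo "${venv_dir}" > "${marker}"       # Record it for later steps
      fi
      "${venv_dir}/bin/pip" install --quiet lftools   # Install requested tools
      export PATH="${venv_dir}/bin:${PATH}"           # Put venv bin on PATH
    }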
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-74643 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   13G  143G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         868       25968           0        5329       30843
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:4b:14:32 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.119/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86124sec preferred_lft 86124sec
    inet6 fe80::f816:3eff:fe4b:1432/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9d:9b:76:14 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9dff:fe9b:7614/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-74643)  10/05/24  _x86_64_  (8 CPU)

10:42:56 LINUX RESTART (8 CPU)

10:43:01        tps      rtps      wtps   bread/s   bwrtn/s
10:44:02     383.29     84.19    299.10   6162.75  67247.72
10:45:01     255.14     22.52    232.62   2437.42  96539.71
10:46:01     339.69     12.26    327.43    799.50  74822.50
10:47:01     133.13      0.98    132.14     47.33  35552.64
Average:     277.91     30.02    247.89   2361.59  68423.46

10:43:01 kbmemfree  kbavail  kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
10:44:02  30168316 31641380    2770904     8.41     57480  1733944  1504596    4.43   930248 1565768   94932
10:45:01  27026000 31664972    5913220    17.95    119812  4708628  1719712    5.06   986580 4462988 1075820
10:46:01  24701772 29779476    8237448    25.01    146352  5074796  8114748   23.88  3111148 4564528   34968
10:47:01  26585144 31569836    6354076    19.29    158636  4975248  1634328    4.81  1382564 4454768    2500
Average:  27120308 31163916    5818912    17.67    120570  4123154  3243346    9.54  1602635 3762013  302055
10:43:01            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
10:44:02             ens3    328.17    235.20   1572.39     74.10      0.00      0.00      0.00      0.00
10:44:02               lo      1.73      1.73      0.17      0.17      0.00      0.00      0.00      0.00
10:44:02          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:45:01             ens3   1091.53    545.91  25422.70     47.99      0.00      0.00      0.00      0.00
10:45:01               lo     12.20     12.20      1.21      1.21      0.00      0.00      0.00      0.00
10:45:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:45:01  br-064aec70927b      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:46:01      veth6a4e8b3      4.67      5.70      0.75      0.84      0.00      0.00      0.00      0.00
10:46:01      veth3863f62      3.47      4.00      0.46      0.38      0.00      0.00      0.00      0.00
10:46:01      vetha55b9c7      3.95      4.05      0.49      0.47      0.00      0.00      0.00      0.00
10:46:01      veth479e33d     10.01     14.08      2.03    220.93      0.00      0.00      0.00      0.02
10:47:01             ens3   1541.94    864.67  27822.42    148.61      0.00      0.00      0.00      0.00
10:47:01               lo     20.96     20.96      2.01      2.01      0.00      0.00      0.00      0.00
10:47:01          docker0     12.23     16.83      2.06    286.58      0.00      0.00      0.00      0.00
Average:             ens3    381.71    215.88   6982.85     37.19      0.00      0.00      0.00      0.00
Average:               lo      4.51      4.51      0.44      0.44      0.00      0.00      0.00      0.00
Average:          docker0      3.07      4.23      0.52     71.94      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-74643)  10/05/24  _x86_64_  (8 CPU)

10:42:56 LINUX RESTART (8 CPU)

10:43:01  CPU    %user   %nice %system %iowait  %steal   %idle
10:44:02  all     8.29    0.00    1.26    4.14    0.04   86.28
10:44:02    0     4.08    0.00    0.73    0.45    0.02   94.72
10:44:02    1    16.81    0.00    3.21   12.93    0.05   67.01
10:44:02    2    17.51    0.00    1.90    1.15    0.05   79.38
10:44:02    3     5.93    0.00    0.79    0.67    0.03   92.58
10:44:02    4     2.97    0.00    0.68   12.68    0.03   83.63
10:44:02    5     3.71    0.00    0.75    0.52    0.07   94.95
10:44:02    6    10.07    0.00    1.44    0.77    0.03   87.69
10:44:02    7     5.27    0.00    0.60    3.97    0.03   90.12
10:45:01  all    14.02    0.00    4.38    5.53    0.05   76.02
10:45:01    0     8.83    0.00    4.35    1.96    0.07   84.80
10:45:01    1    13.73    0.00    4.12    1.35    0.07   80.73
10:45:01    2    31.26    0.00    6.12   22.36    0.09   40.18
10:45:01    3    14.87    0.00    3.86    0.97    0.03   80.27
10:45:01    4     8.77    0.00    4.22    9.54    0.05   77.43
10:45:01    5     9.51    0.00    3.83    4.36    0.03   82.26
10:45:01    6    13.15    0.00    3.90    0.77    0.05   82.12
10:45:01    7    12.07    0.00    4.62    2.99    0.03   80.28
10:46:01  all    25.28    0.00    3.44    5.08    0.08   66.12
10:46:01    0    26.50    0.00    3.69    5.79    0.10   63.92
10:46:01    1    24.22    0.00    3.64    4.81    0.08   67.24
10:46:01    2    21.60    0.00    3.47   18.78    0.10   56.05
10:46:01    3    30.49    0.00    3.41    0.72    0.07   65.31
10:46:01    4    19.15    0.00    2.88    2.90    0.08   74.99
10:46:01    5    22.76    0.00    3.31    4.27    0.08   69.58
10:46:01    6    27.82    0.00    3.40    1.22    0.07   67.48
10:46:01    7    29.66    0.00    3.73    2.11    0.07   64.44
10:47:01  all     4.64    0.00    1.30    2.08    0.05   91.92
10:47:01    0     2.83    0.00    1.42    0.70    0.03   95.02
10:47:01    1     4.37    0.00    1.22    4.58    0.05   89.78
10:47:01    2     2.60    0.00    1.02    0.13    0.05   96.19
10:47:01    3     5.33    0.00    1.65    0.52    0.05   92.45
10:47:01    4     5.92    0.00    1.47    7.09    0.05   85.46
10:47:01    5     9.50    0.00    1.34    3.05    0.07   86.05
10:47:01    6     2.86    0.00    1.22    0.47    0.07   95.38
10:47:01    7     3.70    0.00    1.07    0.17    0.05   95.02
Average:  all    13.04    0.00    2.59    4.20    0.06   80.11
Average:    0    10.57    0.00    2.54    2.23    0.05   84.61
Average:    1    14.78    0.00    3.04    5.94    0.06   76.18
Average:    2    18.18    0.00    3.11   10.54    0.07   68.09
Average:    3    14.13    0.00    2.42    0.72    0.05   82.69
Average:    4     9.20    0.00    2.30    8.05    0.05   80.40
Average:    5    11.37    0.00    2.30    3.04    0.06   83.23
Average:    6    13.47    0.00    2.48    0.81    0.05   83.19
Average:    7    12.68    0.00    2.49    2.31    0.05   82.48
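The I/O, memory, network, and per-CPU tables above are produced by sysstat's sar, as collected by the sysstat.sh post-build step. A minimal sketch of how such a report can be generated with the sar CLI; the sampling interval and count are assumptions chosen to mirror the roughly one-minute rows above, and the data file path is illustrative:

    # Illustrative: sample every 60 seconds, 4 times, writing binary data
    # to a file, then print I/O (-b), memory (-r), per-interface network
    # (-n DEV) and per-CPU (-P ALL) reports from the collected data.
    sar -o /tmp/sa_run.bin 60 4 >/dev/null 2>&1
    sar -b -r -n DEV -f /tmp/sa_run.bin
    sar -P ALL -f /tmp/sa_run.bin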