Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/138377
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21085 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-9VU7m84Q15T5/agent.2295
SSH_AGENT_PID=2297
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp@tmp/private_key_17366572002094807957.key (/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp@tmp/private_key_17366572002094807957.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/77/138377/1 # timeout=30
 > git rev-parse da7302b230b9a765fb93a32b2c9bae9c3f025fb7^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision da7302b230b9a765fb93a32b2c9bae9c3f025fb7 (refs/changes/77/138377/1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f da7302b230b9a765fb93a32b2c9bae9c3f025fb7 # timeout=30
Commit message: "Setting jaeger version for CSITs"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 54d234de0d9260f610425cd496a52265a4082441 # timeout=10
provisioning config files...
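The job checks out the patch set by fetching the Gerrit change ref directly. A minimal sketch of reproducing the same checkout by hand, assuming anonymous read access to the mirror, would be:

    git clone git://cloud.onap.org/mirror/policy/docker.git
    cd docker
    # fetch patch set 1 of Gerrit change 138377, then check out the fetched commit
    git fetch origin refs/changes/77/138377/1
    git checkout FETCH_HEAD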
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins15978912552415859845.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-FKTr
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.1.1 from /tmp/venv-FKTr/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.4.0
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.138
botocore==1.34.138
bs4==0.0.2
cachetools==5.3.3
certifi==2024.6.2
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.7.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.3
email_validator==2.2.0
filelock==3.15.4
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.31.0
httplib2==0.22.0
identify==2.5.36
idna==3.7
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==30.1.0
lftools==0.37.10
lxml==5.2.2
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
netifaces==0.11.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.2.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.1
oslo.config==9.4.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==6.0.0
oslo.serialization==5.4.0
oslo.utils==7.1.0
packaging==24.1
pbr==6.0.0
platformdirs==4.2.2
prettytable==3.10.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.5.0
python-novaclient==18.6.0
python-openstackclient==6.6.0
python-swiftclient==4.6.0
PyYAML==6.0.1
referencing==0.35.1
requests==2.32.3
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.1
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.2
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.5
tqdm==4.66.4
typing_extensions==4.12.2
tzdata==2024.1
urllib3==1.26.19
virtualenv==20.26.3
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
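lf-activate-venv() above builds a throwaway virtualenv, installs lftools into it, and then freezes the package list shown. A rough manual equivalent, assuming python3 is available on PATH (the venv path is the one from this run), would be:

    python3 -m venv /tmp/venv-FKTr
    . /tmp/venv-FKTr/bin/activate
    pip install lftools
    pip freeze    # prints the requirements list shown above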
[policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/sh /tmp/jenkins10764446976498124277.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/sh -xe /tmp/jenkins3535992191915203227.sh
+ /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/csit/run-project-csit.sh apex-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 60.0M  100 60.0M    0     0   149M      0 --:--:-- --:--:-- --:--:--  149M
Setting project configuration for: apex-pdp
Configuring docker compose...
Starting apex-pdp application with Grafana
time="2024-07-03T14:25:30Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string."
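The login warnings above come from passing --password on the command line; docker itself recommends --password-stdin, and the compose error only means the Compose v2 CLI plugin is missing. A hedged sketch of both steps, with the registry and credentials as placeholder variables, would be:

    # log in without exposing the password in the process list
    echo "$DOCKER_PASSWORD" | docker login "$DOCKER_REGISTRY" -u "$DOCKER_USERNAME" --password-stdin
    # verify the Compose v2 plugin is present before falling back to a manual install
    docker compose version || echo "Docker Compose Plugin not installed."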
kafka Pulling
prometheus Pulling
mariadb Pulling
policy-db-migrator Pulling
pap Pulling
zookeeper Pulling
api Pulling
grafana Pulling
apex-pdp Pulling
simulator Pulling
[layer-by-layer pull progress for the ten images condensed: individual layers report "Pulling fs layer", "Waiting", "Downloading", "Verifying Checksum", "Download complete", "Extracting" and "Pull complete" as they finish]
pap Pulled
policy-db-migrator Pulled
api Pulled
simulator Pulled
[download and extraction progress for the remaining images continues]
7a1cb9ad7f75 Downloading [====> ] 9.644MB/115.2MB 473fdc983780 Pull complete 806be17e856d Extracting [==> ] 4.456MB/89.72MB 00b33c871d26 Downloading [===========================> ] 137.7MB/253.3MB 00b33c871d26 Downloading [===========================> ] 137.7MB/253.3MB prometheus Pulled 22ebf0e44c85 Extracting [=========================> ] 18.87MB/37.02MB 22ebf0e44c85 Extracting [=========================> ] 18.87MB/37.02MB 6ca01427385e Extracting [=====================> ] 26.74MB/61.48MB 7a1cb9ad7f75 Downloading [========> ] 19.3MB/115.2MB e8bf24a82546 Extracting [=============================================> ] 164.9MB/180.3MB 00b33c871d26 Downloading [============================> ] 145.2MB/253.3MB 00b33c871d26 Downloading [============================> ] 145.2MB/253.3MB 806be17e856d Extracting [====> ] 7.799MB/89.72MB 22ebf0e44c85 Extracting [===============================> ] 23.59MB/37.02MB 22ebf0e44c85 Extracting [===============================> ] 23.59MB/37.02MB 6ca01427385e Extracting [=========================> ] 31.2MB/61.48MB 7a1cb9ad7f75 Downloading [=============> ] 32.18MB/115.2MB e8bf24a82546 Extracting [===============================================> ] 169.9MB/180.3MB d93f69e96600 Downloading [> ] 535.8kB/115.2MB 00b33c871d26 Downloading [==============================> ] 154.3MB/253.3MB 00b33c871d26 Downloading [==============================> ] 154.3MB/253.3MB 806be17e856d Extracting [=====> ] 10.58MB/89.72MB 22ebf0e44c85 Extracting [=====================================> ] 27.53MB/37.02MB 22ebf0e44c85 Extracting [=====================================> ] 27.53MB/37.02MB 7a1cb9ad7f75 Downloading [===================> ] 43.97MB/115.2MB 6ca01427385e Extracting [===========================> ] 33.98MB/61.48MB e8bf24a82546 Extracting [===============================================> ] 172.1MB/180.3MB d93f69e96600 Downloading [=====> ] 13.43MB/115.2MB 00b33c871d26 Downloading [================================> ] 165MB/253.3MB 00b33c871d26 Downloading [================================> ] 165MB/253.3MB 806be17e856d Extracting [=======> ] 13.37MB/89.72MB 22ebf0e44c85 Extracting [===========================================> ] 32.24MB/37.02MB 22ebf0e44c85 Extracting [===========================================> ] 32.24MB/37.02MB 7a1cb9ad7f75 Downloading [=======================> ] 54.14MB/115.2MB 6ca01427385e Extracting [==============================> ] 37.32MB/61.48MB e8bf24a82546 Extracting [================================================> ] 173.8MB/180.3MB d93f69e96600 Downloading [===========> ] 25.83MB/115.2MB 00b33c871d26 Downloading [==================================> ] 173.1MB/253.3MB 00b33c871d26 Downloading [==================================> ] 173.1MB/253.3MB 806be17e856d Extracting [========> ] 15.6MB/89.72MB 7a1cb9ad7f75 Downloading [==========================> ] 61.65MB/115.2MB 22ebf0e44c85 Extracting [==============================================> ] 34.21MB/37.02MB 22ebf0e44c85 Extracting [==============================================> ] 34.21MB/37.02MB 6ca01427385e Extracting [================================> ] 39.55MB/61.48MB e8bf24a82546 Extracting [================================================> ] 176MB/180.3MB d93f69e96600 Downloading [===============> ] 35.49MB/115.2MB 00b33c871d26 Downloading [====================================> ] 185.4MB/253.3MB 00b33c871d26 Downloading [====================================> ] 185.4MB/253.3MB 806be17e856d Extracting [==========> ] 18.94MB/89.72MB 7a1cb9ad7f75 Downloading [===============================> ] 
72.37MB/115.2MB 22ebf0e44c85 Extracting [================================================> ] 36.18MB/37.02MB 22ebf0e44c85 Extracting [================================================> ] 36.18MB/37.02MB 6ca01427385e Extracting [==================================> ] 42.34MB/61.48MB 22ebf0e44c85 Extracting [==================================================>] 37.02MB/37.02MB 22ebf0e44c85 Extracting [==================================================>] 37.02MB/37.02MB d93f69e96600 Downloading [=====================> ] 49.46MB/115.2MB e8bf24a82546 Extracting [=================================================> ] 177.7MB/180.3MB 7a1cb9ad7f75 Downloading [===================================> ] 82.51MB/115.2MB 00b33c871d26 Downloading [======================================> ] 197.2MB/253.3MB 00b33c871d26 Downloading [======================================> ] 197.2MB/253.3MB 806be17e856d Extracting [============> ] 22.28MB/89.72MB 6ca01427385e Extracting [===================================> ] 44.01MB/61.48MB d93f69e96600 Downloading [==========================> ] 61.84MB/115.2MB 7a1cb9ad7f75 Downloading [========================================> ] 94.35MB/115.2MB e8bf24a82546 Extracting [=================================================> ] 179.4MB/180.3MB 00b33c871d26 Downloading [=========================================> ] 210.1MB/253.3MB 00b33c871d26 Downloading [=========================================> ] 210.1MB/253.3MB 6ca01427385e Extracting [======================================> ] 47.35MB/61.48MB d93f69e96600 Downloading [=================================> ] 76.88MB/115.2MB 806be17e856d Extracting [==============> ] 25.62MB/89.72MB e8bf24a82546 Extracting [==================================================>] 180.3MB/180.3MB 00b33c871d26 Downloading [===========================================> ] 221.9MB/253.3MB 00b33c871d26 Downloading [===========================================> ] 221.9MB/253.3MB 7a1cb9ad7f75 Downloading [=============================================> ] 104MB/115.2MB d93f69e96600 Downloading [===================================> ] 82.24MB/115.2MB 6ca01427385e Extracting [========================================> ] 49.58MB/61.48MB 806be17e856d Extracting [==============> ] 26.74MB/89.72MB 7a1cb9ad7f75 Verifying Checksum 7a1cb9ad7f75 Download complete 00b33c871d26 Downloading [==============================================> ] 235.9MB/253.3MB 00b33c871d26 Downloading [==============================================> ] 235.9MB/253.3MB 6ca01427385e Extracting [===========================================> ] 52.92MB/61.48MB 806be17e856d Extracting [================> ] 28.97MB/89.72MB d93f69e96600 Downloading [=======================================> ] 91.9MB/115.2MB 22ebf0e44c85 Pull complete 22ebf0e44c85 Pull complete bbb9d15c45a1 Downloading [=========> ] 719B/3.633kB bbb9d15c45a1 Downloading [==================================================>] 3.633kB/3.633kB bbb9d15c45a1 Verifying Checksum bbb9d15c45a1 Download complete 00b33c871d26 Downloading [================================================> ] 245MB/253.3MB 00b33c871d26 Downloading [================================================> ] 245MB/253.3MB d93f69e96600 Downloading [=========================================> ] 96.73MB/115.2MB 806be17e856d Extracting [=================> ] 30.64MB/89.72MB 6ca01427385e Extracting [=============================================> ] 55.71MB/61.48MB e8bf24a82546 Pull complete 00b33c871d26 Verifying Checksum 00b33c871d26 Download complete 00b33c871d26 Verifying Checksum 00b33c871d26 
Download complete d93f69e96600 Downloading [==============================================> ] 106.4MB/115.2MB 806be17e856d Extracting [==================> ] 33.42MB/89.72MB 6ca01427385e Extracting [================================================> ] 59.05MB/61.48MB 00b33c871d26 Extracting [> ] 557.1kB/253.3MB 00b33c871d26 Extracting [> ] 557.1kB/253.3MB d93f69e96600 Verifying Checksum d93f69e96600 Download complete 806be17e856d Extracting [====================> ] 36.21MB/89.72MB 6ca01427385e Extracting [=================================================> ] 60.72MB/61.48MB 00b33c871d26 Extracting [==> ] 11.7MB/253.3MB 00b33c871d26 Extracting [==> ] 11.7MB/253.3MB 6ca01427385e Extracting [==================================================>] 61.48MB/61.48MB 806be17e856d Extracting [======================> ] 39.55MB/89.72MB 00b33c871d26 Extracting [====> ] 22.28MB/253.3MB 00b33c871d26 Extracting [====> ] 22.28MB/253.3MB 154b803e2d93 Extracting [===================> ] 32.77kB/84.13kB 154b803e2d93 Extracting [==================================================>] 84.13kB/84.13kB 154b803e2d93 Extracting [==================================================>] 84.13kB/84.13kB 806be17e856d Extracting [=======================> ] 42.34MB/89.72MB 00b33c871d26 Extracting [=====> ] 28.41MB/253.3MB 00b33c871d26 Extracting [=====> ] 28.41MB/253.3MB 806be17e856d Extracting [=======================> ] 42.89MB/89.72MB 00b33c871d26 Extracting [======> ] 30.64MB/253.3MB 00b33c871d26 Extracting [======> ] 30.64MB/253.3MB 806be17e856d Extracting [=========================> ] 45.68MB/89.72MB 00b33c871d26 Extracting [=========> ] 46.79MB/253.3MB 00b33c871d26 Extracting [=========> ] 46.79MB/253.3MB 806be17e856d Extracting [============================> ] 51.81MB/89.72MB 00b33c871d26 Extracting [===========> ] 57.38MB/253.3MB 00b33c871d26 Extracting [===========> ] 57.38MB/253.3MB 806be17e856d Extracting [===============================> ] 57.38MB/89.72MB 00b33c871d26 Extracting [=============> ] 70.75MB/253.3MB 00b33c871d26 Extracting [=============> ] 70.75MB/253.3MB 806be17e856d Extracting [==================================> ] 61.83MB/89.72MB 00b33c871d26 Extracting [================> ] 83MB/253.3MB 00b33c871d26 Extracting [================> ] 83MB/253.3MB 806be17e856d Extracting [=====================================> ] 67.4MB/89.72MB 00b33c871d26 Extracting [==================> ] 93.59MB/253.3MB 00b33c871d26 Extracting [==================> ] 93.59MB/253.3MB 806be17e856d Extracting [=======================================> ] 70.75MB/89.72MB 00b33c871d26 Extracting [====================> ] 102.5MB/253.3MB 00b33c871d26 Extracting [====================> ] 102.5MB/253.3MB 00b33c871d26 Extracting [=====================> ] 106.4MB/253.3MB 00b33c871d26 Extracting [=====================> ] 106.4MB/253.3MB 00b33c871d26 Extracting [=====================> ] 107MB/253.3MB 00b33c871d26 Extracting [=====================> ] 107MB/253.3MB 806be17e856d Extracting [========================================> ] 72.97MB/89.72MB 00b33c871d26 Extracting [======================> ] 112MB/253.3MB 00b33c871d26 Extracting [======================> ] 112MB/253.3MB 806be17e856d Extracting [==========================================> ] 75.76MB/89.72MB 6ca01427385e Pull complete 154b803e2d93 Pull complete 00b33c871d26 Extracting [======================> ] 115.9MB/253.3MB 00b33c871d26 Extracting [======================> ] 115.9MB/253.3MB 806be17e856d Extracting [============================================> ] 79.66MB/89.72MB 00b33c871d26 Extracting 
[=======================> ] 120.9MB/253.3MB 00b33c871d26 Extracting [=======================> ] 120.9MB/253.3MB 806be17e856d Extracting [==============================================> ] 83MB/89.72MB 00b33c871d26 Extracting [========================> ] 125.9MB/253.3MB 00b33c871d26 Extracting [========================> ] 125.9MB/253.3MB e4305231c991 Extracting [==================================================>] 92B/92B e4305231c991 Extracting [==================================================>] 92B/92B 00b33c871d26 Extracting [=========================> ] 127.6MB/253.3MB 00b33c871d26 Extracting [=========================> ] 127.6MB/253.3MB 00b33c871d26 Extracting [=========================> ] 130.9MB/253.3MB 00b33c871d26 Extracting [=========================> ] 130.9MB/253.3MB 806be17e856d Extracting [===============================================> ] 85.23MB/89.72MB e35e8e85e24d Extracting [> ] 524.3kB/50.55MB 00b33c871d26 Extracting [==========================> ] 134.8MB/253.3MB 00b33c871d26 Extracting [==========================> ] 134.8MB/253.3MB 806be17e856d Extracting [================================================> ] 86.9MB/89.72MB e35e8e85e24d Extracting [=> ] 1.049MB/50.55MB 00b33c871d26 Extracting [===========================> ] 139.8MB/253.3MB 00b33c871d26 Extracting [===========================> ] 139.8MB/253.3MB 00b33c871d26 Extracting [============================> ] 144.3MB/253.3MB 00b33c871d26 Extracting [============================> ] 144.3MB/253.3MB 00b33c871d26 Extracting [=============================> ] 149.8MB/253.3MB 00b33c871d26 Extracting [=============================> ] 149.8MB/253.3MB 00b33c871d26 Extracting [==============================> ] 154.9MB/253.3MB 00b33c871d26 Extracting [==============================> ] 154.9MB/253.3MB 00b33c871d26 Extracting [===============================> ] 159.9MB/253.3MB 00b33c871d26 Extracting [===============================> ] 159.9MB/253.3MB 806be17e856d Extracting [=================================================> ] 88.01MB/89.72MB e35e8e85e24d Extracting [=> ] 1.573MB/50.55MB 00b33c871d26 Extracting [================================> ] 163.8MB/253.3MB 00b33c871d26 Extracting [================================> ] 163.8MB/253.3MB e4305231c991 Pull complete 00b33c871d26 Extracting [================================> ] 166.6MB/253.3MB 00b33c871d26 Extracting [================================> ] 166.6MB/253.3MB f469048fbe8d Extracting [==================================================>] 92B/92B f469048fbe8d Extracting [==================================================>] 92B/92B e35e8e85e24d Extracting [==> ] 2.097MB/50.55MB 806be17e856d Extracting [=================================================> ] 89.13MB/89.72MB 00b33c871d26 Extracting [=================================> ] 168.8MB/253.3MB 00b33c871d26 Extracting [=================================> ] 168.8MB/253.3MB 806be17e856d Extracting [==================================================>] 89.72MB/89.72MB 00b33c871d26 Extracting [=================================> ] 169.9MB/253.3MB 00b33c871d26 Extracting [=================================> ] 169.9MB/253.3MB e35e8e85e24d Extracting [===> ] 3.67MB/50.55MB e35e8e85e24d Extracting [====> ] 4.194MB/50.55MB f469048fbe8d Pull complete 00b33c871d26 Extracting [=================================> ] 171.6MB/253.3MB 00b33c871d26 Extracting [=================================> ] 171.6MB/253.3MB e35e8e85e24d Extracting [====> ] 4.719MB/50.55MB e35e8e85e24d Extracting [=======> ] 7.34MB/50.55MB 00b33c871d26 Extracting 
[==================================> ] 173.2MB/253.3MB 00b33c871d26 Extracting [==================================> ] 173.2MB/253.3MB 00b33c871d26 Extracting [==================================> ] 174.4MB/253.3MB 00b33c871d26 Extracting [==================================> ] 174.4MB/253.3MB e35e8e85e24d Extracting [========> ] 8.913MB/50.55MB 00b33c871d26 Extracting [==================================> ] 176MB/253.3MB 00b33c871d26 Extracting [==================================> ] 176MB/253.3MB e35e8e85e24d Extracting [==========> ] 10.49MB/50.55MB 806be17e856d Pull complete c189e028fabb Extracting [==================================================>] 300B/300B c189e028fabb Extracting [==================================================>] 300B/300B 00b33c871d26 Extracting [==================================> ] 177.1MB/253.3MB 00b33c871d26 Extracting [==================================> ] 177.1MB/253.3MB 00b33c871d26 Extracting [===================================> ] 178.8MB/253.3MB 00b33c871d26 Extracting [===================================> ] 178.8MB/253.3MB e35e8e85e24d Extracting [==========> ] 11.01MB/50.55MB 00b33c871d26 Extracting [====================================> ] 183.3MB/253.3MB 00b33c871d26 Extracting [====================================> ] 183.3MB/253.3MB e35e8e85e24d Extracting [============> ] 13.11MB/50.55MB 00b33c871d26 Extracting [====================================> ] 186.6MB/253.3MB 00b33c871d26 Extracting [====================================> ] 186.6MB/253.3MB e35e8e85e24d Extracting [=================> ] 17.83MB/50.55MB 00b33c871d26 Extracting [=====================================> ] 188.3MB/253.3MB 00b33c871d26 Extracting [=====================================> ] 188.3MB/253.3MB 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB e35e8e85e24d Extracting [====================> ] 20.97MB/50.55MB 00b33c871d26 Extracting [=====================================> ] 190.5MB/253.3MB 00b33c871d26 Extracting [=====================================> ] 190.5MB/253.3MB e35e8e85e24d Extracting [========================> ] 24.64MB/50.55MB 00b33c871d26 Extracting [======================================> ] 192.7MB/253.3MB 00b33c871d26 Extracting [======================================> ] 192.7MB/253.3MB e35e8e85e24d Extracting [==========================> ] 26.74MB/50.55MB 00b33c871d26 Extracting [======================================> ] 195.5MB/253.3MB 00b33c871d26 Extracting [======================================> ] 195.5MB/253.3MB e35e8e85e24d Extracting [=============================> ] 29.36MB/50.55MB 00b33c871d26 Extracting [=======================================> ] 198.3MB/253.3MB 00b33c871d26 Extracting [=======================================> ] 198.3MB/253.3MB e35e8e85e24d Extracting [===============================> ] 31.98MB/50.55MB e35e8e85e24d Extracting [=================================> ] 34.08MB/50.55MB 00b33c871d26 Extracting [=======================================> ] 200MB/253.3MB 00b33c871d26 Extracting [=======================================> ] 200MB/253.3MB e35e8e85e24d Extracting [=====================================> ] 37.75MB/50.55MB 00b33c871d26 Extracting [========================================> ] 202.8MB/253.3MB 00b33c871d26 Extracting [========================================> ] 202.8MB/253.3MB e35e8e85e24d Extracting [========================================> ] 40.89MB/50.55MB 00b33c871d26 Extracting 
[========================================> ] 205MB/253.3MB 00b33c871d26 Extracting [========================================> ] 205MB/253.3MB e35e8e85e24d Extracting [===========================================> ] 44.04MB/50.55MB 00b33c871d26 Extracting [========================================> ] 207.2MB/253.3MB 00b33c871d26 Extracting [========================================> ] 207.2MB/253.3MB 00b33c871d26 Extracting [=========================================> ] 211.1MB/253.3MB 00b33c871d26 Extracting [=========================================> ] 211.1MB/253.3MB e35e8e85e24d Extracting [================================================> ] 48.76MB/50.55MB 00b33c871d26 Extracting [==========================================> ] 212.8MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 212.8MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 214.5MB/253.3MB e35e8e85e24d Extracting [=================================================> ] 50.33MB/50.55MB 00b33c871d26 Extracting [==========================================> ] 214.5MB/253.3MB e35e8e85e24d Extracting [==================================================>] 50.55MB/50.55MB 00b33c871d26 Extracting [==========================================> ] 215.6MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 215.6MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 216.1MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 216.1MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 218.4MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 218.4MB/253.3MB c189e028fabb Pull complete 00b33c871d26 Extracting [===========================================> ] 220MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 220MB/253.3MB 634de6c90876 Pull complete 00b33c871d26 Extracting [===========================================> ] 220.6MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 220.6MB/253.3MB 00b33c871d26 Extracting [============================================> ] 223.9MB/253.3MB 00b33c871d26 Extracting [============================================> ] 223.9MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 229MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 229MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 231.2MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 231.2MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 232.8MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 232.8MB/253.3MB 00b33c871d26 Extracting [==============================================> ] 237.9MB/253.3MB 00b33c871d26 Extracting [==============================================> ] 237.9MB/253.3MB cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB e35e8e85e24d Pull complete 00b33c871d26 Extracting [===============================================> ] 241.2MB/253.3MB 00b33c871d26 Extracting [===============================================> ] 241.2MB/253.3MB c9bd119720e4 Extracting [> ] 557.1kB/246.3MB 00b33c871d26 Extracting [=================================================> ] 
249.6MB/253.3MB 00b33c871d26 Extracting [=================================================> ] 249.6MB/253.3MB c9bd119720e4 Extracting [> ] 2.785MB/246.3MB 00b33c871d26 Extracting [==================================================>] 253.3MB/253.3MB 00b33c871d26 Extracting [==================================================>] 253.3MB/253.3MB c9bd119720e4 Extracting [===> ] 15.04MB/246.3MB d0bef95bc6b2 Extracting [==================================================>] 11.92kB/11.92kB d0bef95bc6b2 Extracting [==================================================>] 11.92kB/11.92kB c9bd119720e4 Extracting [====> ] 21.17MB/246.3MB c9bd119720e4 Extracting [====> ] 23.4MB/246.3MB c9bd119720e4 Extracting [======> ] 32.31MB/246.3MB c9bd119720e4 Extracting [=========> ] 46.79MB/246.3MB c9bd119720e4 Extracting [===========> ] 59.05MB/246.3MB c9bd119720e4 Extracting [==============> ] 72.42MB/246.3MB c9bd119720e4 Extracting [================> ] 81.33MB/246.3MB c9bd119720e4 Extracting [==================> ] 92.47MB/246.3MB c9bd119720e4 Extracting [======================> ] 108.6MB/246.3MB c9bd119720e4 Extracting [=========================> ] 123.7MB/246.3MB c9bd119720e4 Extracting [===========================> ] 135.9MB/246.3MB c9bd119720e4 Extracting [============================> ] 138.7MB/246.3MB c9bd119720e4 Extracting [==============================> ] 152.1MB/246.3MB c9bd119720e4 Extracting [=================================> ] 167.1MB/246.3MB c9bd119720e4 Extracting [====================================> ] 179.4MB/246.3MB c9bd119720e4 Extracting [=======================================> ] 193.9MB/246.3MB c9bd119720e4 Extracting [==========================================> ] 210MB/246.3MB c9bd119720e4 Extracting [=============================================> ] 223.4MB/246.3MB c9bd119720e4 Extracting [================================================> ] 240.6MB/246.3MB c9bd119720e4 Extracting [==================================================>] 246.3MB/246.3MB cd00854cfb1a Pull complete 00b33c871d26 Pull complete 00b33c871d26 Pull complete d0bef95bc6b2 Pull complete 6b11e56702ad Extracting [> ] 98.3kB/7.707MB 6b11e56702ad Extracting [> ] 98.3kB/7.707MB c9bd119720e4 Pull complete af860903a445 Extracting [==================================================>] 1.226kB/1.226kB af860903a445 Extracting [==================================================>] 1.226kB/1.226kB mariadb Pulled 6b11e56702ad Extracting [===============================> ] 4.915MB/7.707MB 6b11e56702ad Extracting [===============================> ] 4.915MB/7.707MB 6b11e56702ad Extracting [==================================================>] 7.707MB/7.707MB 6b11e56702ad Extracting [==================================================>] 7.707MB/7.707MB apex-pdp Pulled af860903a445 Pull complete 6b11e56702ad Pull complete 6b11e56702ad Pull complete 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB grafana Pulled 53d69aa7d3fc Pull complete 53d69aa7d3fc Pull complete a3ab11953ef9 Extracting [> ] 426kB/39.52MB a3ab11953ef9 Extracting [> ] 426kB/39.52MB a3ab11953ef9 Extracting [=================> ] 13.63MB/39.52MB a3ab11953ef9 Extracting [=================> ] 13.63MB/39.52MB a3ab11953ef9 Extracting 
[======================================> ] 30.24MB/39.52MB a3ab11953ef9 Extracting [======================================> ] 30.24MB/39.52MB a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB a3ab11953ef9 Pull complete a3ab11953ef9 Pull complete 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Pull complete 91ef9543149d Pull complete 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Pull complete 2ec4f59af178 Pull complete 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Pull complete 8b7e81cd5ef1 Pull complete c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Pull complete c52916c1316e Pull complete d93f69e96600 Extracting [> ] 557.1kB/115.2MB 7a1cb9ad7f75 Extracting [> ] 557.1kB/115.2MB d93f69e96600 Extracting [=====> ] 13.37MB/115.2MB 7a1cb9ad7f75 Extracting [=====> ] 12.81MB/115.2MB d93f69e96600 Extracting [===========> ] 26.74MB/115.2MB 7a1cb9ad7f75 Extracting [==========> ] 24.51MB/115.2MB d93f69e96600 Extracting [==================> ] 43.45MB/115.2MB 7a1cb9ad7f75 Extracting [================> ] 37.32MB/115.2MB d93f69e96600 Extracting [========================> ] 57.38MB/115.2MB 7a1cb9ad7f75 Extracting [======================> ] 50.69MB/115.2MB d93f69e96600 Extracting [==============================> ] 70.75MB/115.2MB 7a1cb9ad7f75 Extracting [=============================> ] 67.4MB/115.2MB d93f69e96600 Extracting [======================================> ] 88.57MB/115.2MB 7a1cb9ad7f75 Extracting [====================================> ] 84.12MB/115.2MB d93f69e96600 Extracting [==========================================> ] 98.6MB/115.2MB 7a1cb9ad7f75 Extracting [===========================================> ] 99.71MB/115.2MB d93f69e96600 Extracting [================================================> ] 110.9MB/115.2MB 7a1cb9ad7f75 Extracting [================================================> ] 110.9MB/115.2MB 7a1cb9ad7f75 Extracting [==================================================>] 115.2MB/115.2MB d93f69e96600 Extracting [==================================================>] 115.2MB/115.2MB 7a1cb9ad7f75 Pull complete d93f69e96600 Pull complete bbb9d15c45a1 Extracting 
[==================================================>] 3.633kB/3.633kB bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB bbb9d15c45a1 Pull complete 0a92c7dea7af Pull complete kafka Pulled zookeeper Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container simulator Creating Container mariadb Creating Container prometheus Creating Container zookeeper Created Container kafka Creating Container prometheus Created Container grafana Creating Container mariadb Created Container policy-db-migrator Creating Container simulator Created Container policy-db-migrator Created Container policy-api Creating Container grafana Created Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-apex-pdp Creating Container policy-apex-pdp Created Container mariadb Starting Container simulator Starting Container prometheus Starting Container zookeeper Starting Container mariadb Started Container policy-db-migrator Starting Container policy-db-migrator Started Container policy-api Starting Container zookeeper Started Container kafka Starting Container prometheus Started Container grafana Starting Container simulator Started Container policy-api Started Container grafana Started Container kafka Started Container policy-pap Starting Container policy-pap Started Container policy-apex-pdp Starting Container policy-apex-pdp Started Prometheus server: http://localhost:30259 Grafana server: http://localhost:30269 Waiting for REST to come up on localhost port 30003... NAMES STATUS policy-apex-pdp Up 10 seconds policy-pap Up 10 seconds policy-api Up 13 seconds kafka Up 11 seconds grafana Up 12 seconds zookeeper Up 16 seconds simulator Up 14 seconds mariadb Up 17 seconds prometheus Up 15 seconds NAMES STATUS policy-apex-pdp Up 15 seconds policy-pap Up 15 seconds policy-api Up 18 seconds kafka Up 16 seconds grafana Up 17 seconds zookeeper Up 21 seconds simulator Up 19 seconds mariadb Up 22 seconds prometheus Up 20 seconds NAMES STATUS policy-apex-pdp Up 20 seconds policy-pap Up 20 seconds policy-api Up 24 seconds kafka Up 21 seconds grafana Up 22 seconds zookeeper Up 26 seconds simulator Up 24 seconds mariadb Up 27 seconds prometheus Up 25 seconds NAMES STATUS policy-apex-pdp Up 25 seconds policy-pap Up 25 seconds policy-api Up 29 seconds kafka Up 26 seconds grafana Up 27 seconds zookeeper Up 31 seconds simulator Up 29 seconds mariadb Up 32 seconds prometheus Up 30 seconds NAMES STATUS policy-apex-pdp Up 30 seconds policy-pap Up 31 seconds policy-api Up 34 seconds kafka Up 31 seconds grafana Up 32 seconds zookeeper Up 36 seconds simulator Up 34 seconds mariadb Up 37 seconds prometheus Up 35 seconds Waiting for REST to come up on localhost port 30001... NAMES STATUS policy-apex-pdp Up 30 seconds policy-pap Up 31 seconds policy-api Up 34 seconds kafka Up 31 seconds grafana Up 32 seconds zookeeper Up 36 seconds simulator Up 34 seconds mariadb Up 37 seconds prometheus Up 35 seconds Build docker image for robot framework Error: No such image: policy-csit-robot Cloning into '/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/csit/resources/tests/models'... 
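The bring-up above waits for the exposed REST ports (30003 and then 30001 on localhost, per the log) to answer, re-printing the NAMES/STATUS table between attempts. A minimal Python sketch of that kind of readiness probe, assuming a plain TCP connect is an acceptable check and that the timeout and retry interval are free choices (the actual CSIT scripts do this in shell):

    # Hypothetical readiness probe mirroring the "Waiting for REST to come up" loop above.
    # Ports 30003/30001 are taken from the log; the cadence and timeout are assumptions.
    import socket
    import time

    def wait_for_port(host: str, port: int, timeout_s: int = 300, interval_s: int = 5) -> bool:
        """Poll a TCP port until it accepts connections or the timeout expires."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:
                time.sleep(interval_s)
        return False

    for port in (30003, 30001):
        print(f"port {port} ready: {wait_for_port('localhost', port)}")
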
Build robot framework docker image Sending build context to Docker daemon 16.49MB Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye 3.10-slim-bullseye: Pulling from library/python 76956b537f14: Pulling fs layer f75f1b8a4051: Pulling fs layer f9adc358e0b8: Pulling fs layer f66e101ef41f: Pulling fs layer b913137adf9e: Pulling fs layer f66e101ef41f: Waiting b913137adf9e: Waiting f75f1b8a4051: Download complete f66e101ef41f: Verifying Checksum f66e101ef41f: Download complete b913137adf9e: Download complete f9adc358e0b8: Download complete 76956b537f14: Verifying Checksum 76956b537f14: Download complete 76956b537f14: Pull complete f75f1b8a4051: Pull complete f9adc358e0b8: Pull complete f66e101ef41f: Pull complete b913137adf9e: Pull complete Digest: sha256:fc8ba6002a477d6536097e9cc529c593cd6621a66c81e601b5353265afd10775 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye ---> 08150e0479fc Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT} ---> Running in 47121f7e5671 Removing intermediate container 47121f7e5671 ---> f0ae5763ce92 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE} ---> Running in f1976de4cff1 Removing intermediate container f1976de4cff1 ---> eb7ec891c2c7 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE TEST_ENV=$TEST_ENV ---> Running in 5ababf4d3806 Removing intermediate container 5ababf4d3806 ---> 9a3cd5e291ff Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze ---> Running in fb362da14342 bcrypt==4.1.3 certifi==2024.6.2 cffi==1.17.0rc1 charset-normalizer==3.3.2 confluent-kafka==2.4.0 cryptography==42.0.8 decorator==5.1.1 deepdiff==7.0.1 dnspython==2.6.1 future==1.0.0 idna==3.7 Jinja2==3.1.4 jsonpath-rw==1.4.0 kafka-python==2.0.2 MarkupSafe==2.1.5 more-itertools==5.0.0 ordered-set==4.1.0 paramiko==3.4.0 pbr==6.0.0 ply==3.11 protobuf==5.27.2 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.2rc1 requests==2.32.3 robotframework==7.0.1 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a11 robotlibcore-temp==1.0.2 six==1.16.0 urllib3==2.2.2 Removing intermediate container fb362da14342 ---> 883fdc986b3d Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE} ---> Running in 92491a93ae70 Removing intermediate container 92491a93ae70 ---> eebb060d9e8a Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/ ---> 064bb994d280 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE} ---> Running in ab701490590b Removing intermediate container ab701490590b ---> e0175ee69bf8 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ] ---> Running in bfd25930b280 Removing intermediate container bfd25930b280 ---> ce4c16e8accd Successfully built ce4c16e8accd Successfully tagged policy-csit-robot:latest top - 14:28:13 up 46 min, 0 users, load average: 2.56, 1.55, 0.63 Tasks: 205 total, 1 running, 130 sleeping, 0 stopped, 0 zombie %Cpu(s): 1.3 us, 0.3 sy, 0.0 ni, 97.6 id, 0.8 wa, 0.0 hi, 0.0 si, 0.0 st total used free shared buff/cache available Mem: 31G 2.7G 22G 1.3M 6.5G 28G Swap: 1.0G 0B 1.0G NAMES STATUS policy-apex-pdp Up About a minute policy-pap Up About a minute policy-api Up About a minute kafka Up About a minute grafana Up About a minute zookeeper Up About a minute simulator Up About a minute mariadb Up About a minute prometheus Up About a minute CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 
95cd78644f92 policy-apex-pdp 0.75% 182.1MiB / 31.41GiB 0.57% 35kB / 51.5kB 0B / 0B 50 8923e1ceb5da policy-pap 0.82% 502.3MiB / 31.41GiB 1.56% 123kB / 148kB 0B / 149MB 64 509428bc3020 policy-api 0.12% 473.1MiB / 31.41GiB 1.47% 989kB / 673kB 0B / 0B 54 9be495a3f092 kafka 1.60% 384.2MiB / 31.41GiB 1.19% 157kB / 150kB 0B / 561kB 85 5e03ed1e0f86 grafana 0.06% 65.15MiB / 31.41GiB 0.20% 24.4kB / 4.96kB 0B / 26.5MB 20 774ad6eb722a zookeeper 0.09% 100.5MiB / 31.41GiB 0.31% 57.3kB / 51.2kB 0B / 410kB 61 2ca4a8edeb7d simulator 0.06% 119.5MiB / 31.41GiB 0.37% 1.43kB / 0B 225kB / 0B 77 e465b8fab7c1 mariadb 0.02% 102MiB / 31.41GiB 0.32% 969kB / 1.22MB 11MB / 71.9MB 29 ee08e65f2188 prometheus 0.00% 20.35MiB / 31.41GiB 0.06% 67.5kB / 3.19kB 0B / 8.19kB 13 time="2024-07-03T14:28:15Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." Container policy-csit Creating Container policy-csit Created Attaching to policy-csit policy-csit | Invoking the robot tests from: apex-pdp-test.robot apex-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV: policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... 
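The suites are parameterised through the ROBOT_VARIABLES listed above, which inject the service endpoints (policy-api, policy-pap, apex-pdp, Kafka, Prometheus, ...) into the test data before "Starting Robot test suites". A rough equivalent using Robot Framework's Python entry point, shown here with only a subset of those variables and an assumed output directory (inside the container the run is actually driven by run-test.sh):

    # Illustrative only: invoking the two suites named above via robot.run.
    # Variable names/values come from the log; file paths and outputdir are assumptions.
    from robot import run

    rc = run(
        "apex-pdp-test.robot",
        "apex-slas.robot",
        variable=[
            "POLICY_API_IP:policy-api:6969",
            "POLICY_PAP_IP:policy-pap:6969",
            "APEX_IP:policy-apex-pdp:6969",
            "APEX_EVENTS_IP:policy-apex-pdp:23324",
            "KAFKA_IP:kafka:9092",
            "PROMETHEUS_IP:prometheus:9090",
        ],
        outputdir="/tmp/results",
    )
    # robot's exit code is the number of failed tests; this CSIT run exited with code 8.
    print("robot exit code:", rc)
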
policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test policy-csit | ============================================================================== policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ExecuteApexSampleDomainPolicy | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ExecuteApexTestPnfPolicy | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ExecuteApexTestPnfPolicyWithMetadataSet | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify policy-apex-pdp is exporting prometheus metrics | FAIL | policy-csit | '# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. policy-csit | # TYPE process_cpu_seconds_total counter policy-csit | process_cpu_seconds_total 8.34 policy-csit | # HELP process_start_time_seconds Start time of the process since unix epoch in seconds. policy-csit | # TYPE process_start_time_seconds gauge policy-csit | process_start_time_seconds 1.720016842817E9 policy-csit | # HELP process_open_fds Number of open file descriptors. policy-csit | # TYPE process_open_fds gauge policy-csit | process_open_fds 387.0 policy-csit | # HELP process_max_fds Maximum number of open file descriptors. policy-csit | # TYPE process_max_fds gauge policy-csit | process_max_fds 1048576.0 policy-csit | # HELP process_virtual_memory_bytes Virtual memory size in bytes. policy-csit | # TYPE process_virtual_memory_bytes gauge policy-csit | process_virtual_memory_bytes 1.0461679616E10 policy-csit | # HELP process_resident_memory_bytes Resident memory size in bytes. policy-csit | # TYPE process_resident_memory_bytes gauge policy-csit | process_resident_memory_bytes 1.99868416E8 policy-csit | [ Message content over the limit has been removed. ] policy-csit | # TYPE pdpa_policy_deployments_total counter policy-csit | # HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. 
policy-csit | # TYPE jvm_memory_pool_allocated_bytes_created gauge policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.720016844472E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Old Gen",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Eden Space",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Survivor Space",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.720016844501E9 policy-csit | ' does not contain 'pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0' policy-csit | ------------------------------------------------------------------------------ policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test | FAIL | policy-csit | 5 tests, 1 passed, 4 failed policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas policy-csit | ============================================================================== policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionAndEventRateLowComplexity :: Validate that ... | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionAndEventRateModerateComplexity :: Validate ... | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionAndEventRateHighComplexity :: Validate that... | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionTimes :: Validate policy execution times us... 
| FAIL | policy-csit | Resolving variable '${resp['data']['result'][0]['value'][1]}' failed: IndexError: list index out of range policy-csit | ------------------------------------------------------------------------------ policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas | FAIL | policy-csit | 6 tests, 2 passed, 4 failed policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas | FAIL | policy-csit | 11 tests, 3 passed, 8 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 8 policy-csit exited with code 8 NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes grafana Up 2 minutes zookeeper Up 2 minutes simulator Up 2 minutes mariadb Up 2 minutes prometheus Up 2 minutes Shut down started! Collecting logs from docker compose containers... time="2024-07-03T14:29:22Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:22Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:23Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:23Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:23Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:24Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:24Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:24Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:25Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:25Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:25Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." time="2024-07-03T14:29:26Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." 
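Each of the failed Execute*/ValidatePolicyExecutionAndEventRate* keywords first creates the native Apex policy through policy-api and asserts HTTP 201 (Created); this run received 200 instead, so every step that depends on the deployed policy failed as well. A hedged sketch of that create call against the URL shown in the failures, with the credentials and payload file name left as placeholders because they are not visible in this log:

    # Sketch of the policy-create request behind "Expected status: 201 != 200".
    # API_USER/API_PASSWORD and the payload file are placeholders, not values from the log.
    import requests

    API_USER = "<user>"
    API_PASSWORD = "<password>"
    URL = ("http://policy-api:6969/policy/api/v1/policytypes/"
           "onap.policies.native.Apex/versions/1.0.0/policies")

    with open("policy.native.apex.json") as f:  # hypothetical payload file
        body = f.read()

    resp = requests.post(URL, data=body,
                         headers={"Content-Type": "application/json"},
                         auth=(API_USER, API_PASSWORD))
    # The Robot keyword asserts 201 Created; this run got 200 back instead.
    assert resp.status_code == 201, f"Expected status: 201 != {resp.status_code}"
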
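The two remaining failures follow from those missing deployments: the Metrics test scrapes policy-apex-pdp and looks for the literal sample pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0, and ValidatePolicyExecutionTimes indexes the first series of a Prometheus instant query, which raises the IndexError seen above whenever the result list is empty. A sketch of both checks, where the scrape URL, any required auth, and the query expression are assumptions rather than values taken from the log:

    # Sketch of the metrics containment check and a guarded Prometheus instant query.
    import requests

    # 1) Scrape apex-pdp's metrics endpoint (URL assumed) and check the deploy counter.
    metrics = requests.get("http://policy-apex-pdp:6969/metrics").text
    expected = 'pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0'
    print("deploy counter present:", expected in metrics)  # False in this run: nothing was deployed

    # 2) Instant query against Prometheus; guard the empty-result case that caused
    #    the IndexError on resp['data']['result'][0]['value'][1] above.
    resp = requests.get("http://prometheus:9090/api/v1/query",
                        params={"query": "pdpa_engine_event_executions"}).json()  # hypothetical expression
    series = resp.get("data", {}).get("result", [])
    value = series[0]["value"][1] if series else None  # None when the query returns no samples
    print("query value:", value)
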
======== Logs from grafana ======== grafana | logger=settings t=2024-07-03T14:26:52.217973073Z level=info msg="Starting Grafana" version=11.1.0 commit=5b85c4c2fcf5d32d4f68aaef345c53096359b2f1 branch=HEAD compiled=2024-07-03T14:26:52Z grafana | logger=settings t=2024-07-03T14:26:52.218479195Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-07-03T14:26:52.218493275Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-07-03T14:26:52.218497595Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-07-03T14:26:52.218501055Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-07-03T14:26:52.218503865Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-07-03T14:26:52.218509805Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-07-03T14:26:52.218518365Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-07-03T14:26:52.218525376Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-07-03T14:26:52.218528526Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-07-03T14:26:52.218531576Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-07-03T14:26:52.218535116Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-07-03T14:26:52.218539016Z level=info msg=Target target=[all] grafana | logger=settings t=2024-07-03T14:26:52.218555596Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-07-03T14:26:52.218559476Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-07-03T14:26:52.218562646Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-07-03T14:26:52.218565766Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-07-03T14:26:52.218569346Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-07-03T14:26:52.218572767Z level=info msg="App mode production" grafana | logger=featuremgmt t=2024-07-03T14:26:52.220970487Z level=info msg=FeatureToggles correlations=true managedPluginsInstall=true logRowsPopoverMenu=true prometheusDataplane=true lokiStructuredMetadata=true recordedQueriesMulti=true transformationsRedesign=true publicDashboards=true alertingSimplifiedRouting=true prometheusMetricEncyclopedia=true betterPageScrolling=true lokiQueryHints=true kubernetesPlaylists=true awsDatasourcesNewFormStyling=true alertingInsights=true annotationPermissionUpdate=true topnav=true lokiQuerySplitting=true lokiMetricDataplane=true logsExploreTableVisualisation=true nestedFolders=true awsAsyncQueryCaching=true ssoSettingsApi=true exploreMetrics=true angularDeprecationUI=true cloudWatchCrossAccountQuerying=true logsInfiniteScrolling=true dataplaneFrontendFallback=true dashgpt=true 
cloudWatchNewLabelParsing=true alertingNoDataErrorExecution=true prometheusConfigOverhaulAuth=true exploreContentOutline=true recoveryThreshold=true panelMonitoring=true influxdbBackendMigration=true logsContextDatasourceUi=true grafana | logger=sqlstore t=2024-07-03T14:26:52.221039168Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-07-03T14:26:52.221187351Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-07-03T14:26:52.22497914Z level=info msg="Locking database" grafana | logger=migrator t=2024-07-03T14:26:52.224992811Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-07-03T14:26:52.225761407Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-07-03T14:26:52.226705747Z level=info msg="Migration successfully executed" id="create migration_log table" duration=944.02µs grafana | logger=migrator t=2024-07-03T14:26:52.245337577Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-07-03T14:26:52.246637825Z level=info msg="Migration successfully executed" id="create user table" duration=1.299858ms grafana | logger=migrator t=2024-07-03T14:26:52.257104893Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-07-03T14:26:52.25838226Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.276287ms grafana | logger=migrator t=2024-07-03T14:26:52.267170594Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-07-03T14:26:52.268058482Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=892.148µs grafana | logger=migrator t=2024-07-03T14:26:52.274141651Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-07-03T14:26:52.274783444Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=644.153µs grafana | logger=migrator t=2024-07-03T14:26:52.280073585Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-07-03T14:26:52.281032625Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=959.61µs grafana | logger=migrator t=2024-07-03T14:26:52.288693315Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-07-03T14:26:52.292159338Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.481313ms grafana | logger=migrator t=2024-07-03T14:26:52.296199772Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-07-03T14:26:52.29701028Z level=info msg="Migration successfully executed" id="create user table v2" duration=811.128µs grafana | logger=migrator t=2024-07-03T14:26:52.306630411Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-07-03T14:26:52.307612161Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=986µs grafana | logger=migrator t=2024-07-03T14:26:52.31993627Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-07-03T14:26:52.32092932Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" 
duration=997.9µs grafana | logger=migrator t=2024-07-03T14:26:52.332547523Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:52.333018054Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=474.051µs grafana | logger=migrator t=2024-07-03T14:26:52.339423288Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-07-03T14:26:52.34000848Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=585.872µs grafana | logger=migrator t=2024-07-03T14:26:52.342969142Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-07-03T14:26:52.344024024Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.054622ms grafana | logger=migrator t=2024-07-03T14:26:52.350853268Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-07-03T14:26:52.350894848Z level=info msg="Migration successfully executed" id="Update user table charset" duration=39.671µs grafana | logger=migrator t=2024-07-03T14:26:52.356011276Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-07-03T14:26:52.357468566Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.45877ms grafana | logger=migrator t=2024-07-03T14:26:52.361216045Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-07-03T14:26:52.3614499Z level=info msg="Migration successfully executed" id="Add missing user data" duration=234.435µs grafana | logger=migrator t=2024-07-03T14:26:52.366659219Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-07-03T14:26:52.367863044Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.204545ms grafana | logger=migrator t=2024-07-03T14:26:52.374288308Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-07-03T14:26:52.375131877Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=845.499µs grafana | logger=migrator t=2024-07-03T14:26:52.385727528Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-07-03T14:26:52.387130918Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.40697ms grafana | logger=migrator t=2024-07-03T14:26:52.391148642Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-07-03T14:26:52.399046887Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.897305ms grafana | logger=migrator t=2024-07-03T14:26:52.408007925Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-07-03T14:26:52.40918454Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.178415ms grafana | logger=migrator t=2024-07-03T14:26:52.41349207Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-07-03T14:26:52.413704874Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=212.064µs 
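The migrator records above are written to the migration_log table that the very first migration creates inside /var/lib/grafana/grafana.db. A minimal sketch for checking which migrations a Grafana instance has applied, assuming the SQLite file has been copied out of the container (e.g. with docker cp) and that migration_log keeps its usual migration_id, success and timestamp columns:

    import sqlite3

    # Path is an assumption: grafana.db copied out of the grafana container to the CWD.
    conn = sqlite3.connect("grafana.db")
    rows = conn.execute(
        "SELECT migration_id, success, timestamp FROM migration_log ORDER BY timestamp"
    ).fetchall()
    for migration_id, success, timestamp in rows:
        # One row per migration id seen in the log output above.
        print(f"{timestamp}  ok={bool(success)}  {migration_id}")
    conn.close()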
grafana | logger=migrator t=2024-07-03T14:26:52.416728527Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-07-03T14:26:52.41729989Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=572.163µs grafana | logger=migrator t=2024-07-03T14:26:52.421042958Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-07-03T14:26:52.421273773Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=230.735µs grafana | logger=migrator t=2024-07-03T14:26:52.426724037Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2024-07-03T14:26:52.427042114Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=318.047µs grafana | logger=migrator t=2024-07-03T14:26:52.433747374Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2024-07-03T14:26:52.434044831Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=305.868µs grafana | logger=migrator t=2024-07-03T14:26:52.443936848Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-07-03T14:26:52.445019331Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.085443ms grafana | logger=migrator t=2024-07-03T14:26:52.452590319Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-07-03T14:26:52.453646922Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.058283ms grafana | logger=migrator t=2024-07-03T14:26:52.46076459Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-07-03T14:26:52.461413954Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=654.984µs grafana | logger=migrator t=2024-07-03T14:26:52.46649149Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-07-03T14:26:52.467034081Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=542.871µs grafana | logger=migrator t=2024-07-03T14:26:52.474179892Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-07-03T14:26:52.475154272Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=974.041µs grafana | logger=migrator t=2024-07-03T14:26:52.480616476Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-07-03T14:26:52.480663287Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=47.521µs grafana | logger=migrator t=2024-07-03T14:26:52.486087381Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-07-03T14:26:52.487329447Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.235356ms grafana | logger=migrator t=2024-07-03T14:26:52.491336241Z 
level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-07-03T14:26:52.491964164Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=627.243µs grafana | logger=migrator t=2024-07-03T14:26:52.500814179Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-07-03T14:26:52.501389502Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=576.873µs grafana | logger=migrator t=2024-07-03T14:26:52.509698436Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-07-03T14:26:52.510277448Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=581.292µs grafana | logger=migrator t=2024-07-03T14:26:52.516515348Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-07-03T14:26:52.51898779Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.474312ms grafana | logger=migrator t=2024-07-03T14:26:52.524567087Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-07-03T14:26:52.525477426Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=910.649µs grafana | logger=migrator t=2024-07-03T14:26:52.531178115Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-07-03T14:26:52.531993973Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=815.418µs grafana | logger=migrator t=2024-07-03T14:26:52.539344116Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-07-03T14:26:52.53999033Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=649.764µs grafana | logger=migrator t=2024-07-03T14:26:52.547368374Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-07-03T14:26:52.547959227Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=591.733µs grafana | logger=migrator t=2024-07-03T14:26:52.551873949Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-07-03T14:26:52.552464932Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=586.052µs grafana | logger=migrator t=2024-07-03T14:26:52.559717713Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:52.560099612Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=381.969µs grafana | logger=migrator t=2024-07-03T14:26:52.566474765Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-07-03T14:26:52.567276322Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=801.016µs grafana | logger=migrator t=2024-07-03T14:26:52.573619784Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-07-03T14:26:52.574148176Z level=info msg="Migration successfully executed" id="Set 
created for temp users that will otherwise prematurely expire" duration=527.832µs grafana | logger=migrator t=2024-07-03T14:26:52.579716753Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-07-03T14:26:52.580441278Z level=info msg="Migration successfully executed" id="create star table" duration=724.685µs grafana | logger=migrator t=2024-07-03T14:26:52.590038168Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-07-03T14:26:52.591361956Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.314488ms grafana | logger=migrator t=2024-07-03T14:26:52.595959162Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-07-03T14:26:52.596854922Z level=info msg="Migration successfully executed" id="create org table v1" duration=896.17µs grafana | logger=migrator t=2024-07-03T14:26:52.601029469Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-07-03T14:26:52.601805805Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=776.006µs grafana | logger=migrator t=2024-07-03T14:26:52.607141356Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-07-03T14:26:52.608186419Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.045253ms grafana | logger=migrator t=2024-07-03T14:26:52.614118423Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-07-03T14:26:52.61542417Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.305537ms grafana | logger=migrator t=2024-07-03T14:26:52.624192233Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-07-03T14:26:52.624978211Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=788.658µs grafana | logger=migrator t=2024-07-03T14:26:52.631452766Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-07-03T14:26:52.633084801Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.631885ms grafana | logger=migrator t=2024-07-03T14:26:52.638391522Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-07-03T14:26:52.638428673Z level=info msg="Migration successfully executed" id="Update org table charset" duration=38.251µs grafana | logger=migrator t=2024-07-03T14:26:52.643245513Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-07-03T14:26:52.643267913Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=23.11µs grafana | logger=migrator t=2024-07-03T14:26:52.648987373Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-07-03T14:26:52.649183587Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=195.774µs grafana | logger=migrator t=2024-07-03T14:26:52.655551971Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-07-03T14:26:52.656291246Z level=info 
msg="Migration successfully executed" id="create dashboard table" duration=738.555µs grafana | logger=migrator t=2024-07-03T14:26:52.660653757Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-07-03T14:26:52.661431214Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=771.677µs grafana | logger=migrator t=2024-07-03T14:26:52.667615744Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-07-03T14:26:52.668493062Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=876.688µs grafana | logger=migrator t=2024-07-03T14:26:52.673693741Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-07-03T14:26:52.674997559Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.303148ms grafana | logger=migrator t=2024-07-03T14:26:52.681885263Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-07-03T14:26:52.683364454Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.483011ms grafana | logger=migrator t=2024-07-03T14:26:52.690768699Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-07-03T14:26:52.691608627Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=840.589µs grafana | logger=migrator t=2024-07-03T14:26:52.695464637Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-07-03T14:26:52.701594006Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.125778ms grafana | logger=migrator t=2024-07-03T14:26:52.710082733Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-07-03T14:26:52.710976192Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=897.169µs grafana | logger=migrator t=2024-07-03T14:26:52.717230684Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-07-03T14:26:52.718252875Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.022431ms grafana | logger=migrator t=2024-07-03T14:26:52.723076636Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-07-03T14:26:52.724040216Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=964.94µs grafana | logger=migrator t=2024-07-03T14:26:52.729819857Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:52.730185694Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=366.437µs grafana | logger=migrator t=2024-07-03T14:26:52.734026376Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-07-03T14:26:52.73659642Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=2.563223ms grafana | logger=migrator t=2024-07-03T14:26:52.741307057Z level=info msg="Executing migration" id="alter 
dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-07-03T14:26:52.741554832Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=248.685µs grafana | logger=migrator t=2024-07-03T14:26:52.749102941Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-07-03T14:26:52.751288707Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.185726ms grafana | logger=migrator t=2024-07-03T14:26:52.756173669Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-07-03T14:26:52.75816273Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.988761ms grafana | logger=migrator t=2024-07-03T14:26:52.761532302Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.763681946Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.157165ms grafana | logger=migrator t=2024-07-03T14:26:52.768897986Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.770150942Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.252626ms grafana | logger=migrator t=2024-07-03T14:26:52.779426446Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.781356067Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.923281ms grafana | logger=migrator t=2024-07-03T14:26:52.786104836Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.78769086Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.591803ms grafana | logger=migrator t=2024-07-03T14:26:52.793277787Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-07-03T14:26:52.794063103Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=784.656µs grafana | logger=migrator t=2024-07-03T14:26:52.798151789Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-07-03T14:26:52.79818536Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=34.091µs grafana | logger=migrator t=2024-07-03T14:26:52.802066781Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-07-03T14:26:52.802104551Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=38.61µs grafana | logger=migrator t=2024-07-03T14:26:52.808931014Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.812490509Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.560165ms grafana | logger=migrator t=2024-07-03T14:26:52.815313269Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.81731374Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.999941ms 
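Each "Executing migration" / "Migration successfully executed" pair in this output carries a duration field, which is useful for spotting unusually slow migrations when a CSIT run drags. A minimal sketch, assuming the grafana container log has been saved locally as grafana.log in the logfmt style shown here (the file name and the µs/ms-only units are assumptions from this output):

    import re

    # Matches: id="<migration id>" duration=<value>µs|ms
    pattern = re.compile(r'id="([^"]+)" duration=([\d.]+)(µs|ms)')

    durations = []
    with open("grafana.log", encoding="utf-8") as f:
        for line in f:
            for mig_id, value, unit in pattern.findall(line):
                millis = float(value) if unit == "ms" else float(value) / 1000.0
                durations.append((millis, mig_id))

    # Print the ten slowest migrations in milliseconds.
    for millis, mig_id in sorted(durations, reverse=True)[:10]:
        print(f"{millis:8.3f} ms  {mig_id}")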
grafana | logger=migrator t=2024-07-03T14:26:52.820116798Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.821987348Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.86454ms grafana | logger=migrator t=2024-07-03T14:26:52.826169536Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.828018074Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.848148ms grafana | logger=migrator t=2024-07-03T14:26:52.830459486Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.830701661Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=242.235µs grafana | logger=migrator t=2024-07-03T14:26:52.832905927Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-07-03T14:26:52.834275785Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.369918ms grafana | logger=migrator t=2024-07-03T14:26:52.839487694Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-07-03T14:26:52.84067652Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.188986ms grafana | logger=migrator t=2024-07-03T14:26:52.843609691Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-07-03T14:26:52.843630582Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=21.641µs grafana | logger=migrator t=2024-07-03T14:26:52.846480531Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-07-03T14:26:52.847283368Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=802.507µs grafana | logger=migrator t=2024-07-03T14:26:52.85167145Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-07-03T14:26:52.852789833Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.117793ms grafana | logger=migrator t=2024-07-03T14:26:52.855697314Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-07-03T14:26:52.862540158Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.838564ms grafana | logger=migrator t=2024-07-03T14:26:52.865416188Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-07-03T14:26:52.866265896Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=849.378µs grafana | logger=migrator t=2024-07-03T14:26:52.868896461Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-07-03T14:26:52.869462622Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=566.121µs grafana | logger=migrator t=2024-07-03T14:26:52.873460626Z level=info msg="Executing 
migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-07-03T14:26:52.874037459Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=576.693µs grafana | logger=migrator t=2024-07-03T14:26:52.876627773Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:52.877126163Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=498.3µs grafana | logger=migrator t=2024-07-03T14:26:52.880142637Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-07-03T14:26:52.880970924Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=828.037µs grafana | logger=migrator t=2024-07-03T14:26:52.885597541Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-07-03T14:26:52.887662594Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.064753ms grafana | logger=migrator t=2024-07-03T14:26:52.890296319Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-07-03T14:26:52.891326651Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.030152ms grafana | logger=migrator t=2024-07-03T14:26:52.894344844Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-07-03T14:26:52.89462356Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=278.526µs grafana | logger=migrator t=2024-07-03T14:26:52.899319308Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-07-03T14:26:52.899440551Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=121.623µs grafana | logger=migrator t=2024-07-03T14:26:52.901878422Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-07-03T14:26:52.902406413Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=527.881µs grafana | logger=migrator t=2024-07-03T14:26:52.904846034Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.906342285Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.496021ms grafana | logger=migrator t=2024-07-03T14:26:52.910454212Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2024-07-03T14:26:52.912667237Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.212755ms grafana | logger=migrator t=2024-07-03T14:26:52.915674131Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2024-07-03T14:26:52.916644381Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=970.23µs grafana | logger=migrator t=2024-07-03T14:26:52.920003861Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2024-07-03T14:26:52.925879704Z level=info msg="Migration successfully executed" id="create data_source table" duration=5.874863ms grafana | logger=migrator 
t=2024-07-03T14:26:52.931538243Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-07-03T14:26:52.932225197Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=694.274µs grafana | logger=migrator t=2024-07-03T14:26:52.934888793Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-07-03T14:26:52.935803533Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=914.62µs grafana | logger=migrator t=2024-07-03T14:26:52.938633732Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-07-03T14:26:52.939800266Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.166304ms grafana | logger=migrator t=2024-07-03T14:26:52.945174789Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-07-03T14:26:52.946049617Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=873.868µs grafana | logger=migrator t=2024-07-03T14:26:52.948671132Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-07-03T14:26:52.955031686Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.360164ms grafana | logger=migrator t=2024-07-03T14:26:52.958007968Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-07-03T14:26:52.95907195Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.063342ms grafana | logger=migrator t=2024-07-03T14:26:52.96383985Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-07-03T14:26:52.964413952Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=573.902µs grafana | logger=migrator t=2024-07-03T14:26:52.966773291Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-07-03T14:26:52.967592268Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=816.737µs grafana | logger=migrator t=2024-07-03T14:26:52.971291756Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-07-03T14:26:52.972124804Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=833.959µs grafana | logger=migrator t=2024-07-03T14:26:52.977302032Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-07-03T14:26:52.979565519Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.262687ms grafana | logger=migrator t=2024-07-03T14:26:52.982595912Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-07-03T14:26:52.985250448Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.654136ms grafana | logger=migrator t=2024-07-03T14:26:52.988098818Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator 
t=2024-07-03T14:26:52.988128418Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=30.06µs grafana | logger=migrator t=2024-07-03T14:26:52.993691825Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-07-03T14:26:52.993880679Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=188.994µs grafana | logger=migrator t=2024-07-03T14:26:52.996233438Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-07-03T14:26:52.998812052Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.577144ms grafana | logger=migrator t=2024-07-03T14:26:53.001754064Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-07-03T14:26:53.001954598Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=207.814µs grafana | logger=migrator t=2024-07-03T14:26:53.004777229Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-07-03T14:26:53.005016755Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=239.415µs grafana | logger=migrator t=2024-07-03T14:26:53.013775062Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-07-03T14:26:53.016170077Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.394325ms grafana | logger=migrator t=2024-07-03T14:26:53.024462198Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-07-03T14:26:53.024744065Z level=info msg="Migration successfully executed" id="Update uid value" duration=282.017µs grafana | logger=migrator t=2024-07-03T14:26:53.032489604Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-07-03T14:26:53.034366168Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.855463ms grafana | logger=migrator t=2024-07-03T14:26:53.044350128Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-07-03T14:26:53.045159248Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=810.91µs grafana | logger=migrator t=2024-07-03T14:26:53.054026163Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2024-07-03T14:26:53.05953611Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=5.508137ms grafana | logger=migrator t=2024-07-03T14:26:53.067631867Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2024-07-03T14:26:53.070097755Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.465817ms grafana | logger=migrator t=2024-07-03T14:26:53.078140341Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-07-03T14:26:53.078880578Z level=info msg="Migration successfully executed" id="create api_key table" duration=742.166µs grafana | logger=migrator t=2024-07-03T14:26:53.115912054Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2024-07-03T14:26:53.120209724Z level=info msg="Migration successfully executed" id="add index 
api_key.account_id" duration=4.296549ms grafana | logger=migrator t=2024-07-03T14:26:53.125522235Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-07-03T14:26:53.126651462Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.134217ms grafana | logger=migrator t=2024-07-03T14:26:53.131993416Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-07-03T14:26:53.132790264Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=796.988µs grafana | logger=migrator t=2024-07-03T14:26:53.135756072Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-07-03T14:26:53.13652831Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=770.108µs grafana | logger=migrator t=2024-07-03T14:26:53.142718504Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-07-03T14:26:53.14342726Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=708.646µs grafana | logger=migrator t=2024-07-03T14:26:53.145748914Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-07-03T14:26:53.14643425Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=685.216µs grafana | logger=migrator t=2024-07-03T14:26:53.149597063Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-07-03T14:26:53.157962936Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.365053ms grafana | logger=migrator t=2024-07-03T14:26:53.164267782Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-07-03T14:26:53.16633523Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=2.072879ms grafana | logger=migrator t=2024-07-03T14:26:53.170600949Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-07-03T14:26:53.172353689Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.75789ms grafana | logger=migrator t=2024-07-03T14:26:53.176339071Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-07-03T14:26:53.17715809Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=818.069µs grafana | logger=migrator t=2024-07-03T14:26:53.179990166Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-07-03T14:26:53.180848885Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=858.939µs grafana | logger=migrator t=2024-07-03T14:26:53.184782457Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:53.185089224Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=307.907µs grafana | logger=migrator t=2024-07-03T14:26:53.188955593Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-07-03T14:26:53.189806242Z level=info msg="Migration 
successfully executed" id="Drop old table api_key_v1" duration=850.209µs grafana | logger=migrator t=2024-07-03T14:26:53.194707726Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-07-03T14:26:53.194755837Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=48.061µs grafana | logger=migrator t=2024-07-03T14:26:53.19836794Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-07-03T14:26:53.200928759Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.558029ms grafana | logger=migrator t=2024-07-03T14:26:53.205662349Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-07-03T14:26:53.208146717Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.484158ms grafana | logger=migrator t=2024-07-03T14:26:53.314017465Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-07-03T14:26:53.314353103Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=338.688µs grafana | logger=migrator t=2024-07-03T14:26:53.414293804Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-07-03T14:26:53.418670535Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.379091ms grafana | logger=migrator t=2024-07-03T14:26:53.423833495Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-07-03T14:26:53.426344873Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.511339ms grafana | logger=migrator t=2024-07-03T14:26:53.429986347Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-07-03T14:26:53.430700053Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=713.436µs grafana | logger=migrator t=2024-07-03T14:26:53.434123063Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-07-03T14:26:53.434616924Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=495.141µs grafana | logger=migrator t=2024-07-03T14:26:53.440157382Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-07-03T14:26:53.441426441Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.266039ms grafana | logger=migrator t=2024-07-03T14:26:53.446232663Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-07-03T14:26:53.447546312Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.313ms grafana | logger=migrator t=2024-07-03T14:26:53.451930084Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-07-03T14:26:53.453250914Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.32075ms grafana | logger=migrator t=2024-07-03T14:26:53.458300571Z level=info msg="Executing 
migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-07-03T14:26:53.459687304Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.386483ms grafana | logger=migrator t=2024-07-03T14:26:53.464800262Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-07-03T14:26:53.464865184Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=65.482µs grafana | logger=migrator t=2024-07-03T14:26:53.468323224Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-07-03T14:26:53.468396825Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=75.772µs grafana | logger=migrator t=2024-07-03T14:26:53.475266484Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-07-03T14:26:53.480002493Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.736229ms grafana | logger=migrator t=2024-07-03T14:26:53.484331294Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-07-03T14:26:53.486945744Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.613959ms grafana | logger=migrator t=2024-07-03T14:26:53.49021323Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-07-03T14:26:53.490278281Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=64.351µs grafana | logger=migrator t=2024-07-03T14:26:53.492836171Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2024-07-03T14:26:53.493525726Z level=info msg="Migration successfully executed" id="create quota table v1" duration=689.645µs grafana | logger=migrator t=2024-07-03T14:26:53.498063791Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-07-03T14:26:53.49931909Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.254569ms grafana | logger=migrator t=2024-07-03T14:26:53.502994275Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-07-03T14:26:53.503031626Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=38.301µs grafana | logger=migrator t=2024-07-03T14:26:53.506451365Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2024-07-03T14:26:53.507189432Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=736.267µs grafana | logger=migrator t=2024-07-03T14:26:53.511540112Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-07-03T14:26:53.512426554Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=885.852µs grafana | logger=migrator t=2024-07-03T14:26:53.516724323Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | 
logger=migrator t=2024-07-03T14:26:53.521250148Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.525695ms grafana | logger=migrator t=2024-07-03T14:26:53.525854053Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-07-03T14:26:53.525875574Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=22.181µs grafana | logger=migrator t=2024-07-03T14:26:53.533760126Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-07-03T14:26:53.535705152Z level=info msg="Migration successfully executed" id="create session table" duration=1.945006ms grafana | logger=migrator t=2024-07-03T14:26:53.539253413Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-07-03T14:26:53.539443408Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=190.475µs grafana | logger=migrator t=2024-07-03T14:26:53.54386566Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-07-03T14:26:53.543946932Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=81.652µs grafana | logger=migrator t=2024-07-03T14:26:53.54689304Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-07-03T14:26:53.547595226Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=702.686µs grafana | logger=migrator t=2024-07-03T14:26:53.551537818Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-07-03T14:26:53.552771896Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.233448ms grafana | logger=migrator t=2024-07-03T14:26:53.584797166Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-07-03T14:26:53.584845368Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=47.311µs grafana | logger=migrator t=2024-07-03T14:26:53.645946181Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-07-03T14:26:53.645988812Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=45.911µs grafana | logger=migrator t=2024-07-03T14:26:53.648895029Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-07-03T14:26:53.653350452Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.456383ms grafana | logger=migrator t=2024-07-03T14:26:53.657395695Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-07-03T14:26:53.660405065Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.00875ms grafana | logger=migrator t=2024-07-03T14:26:53.663522167Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-07-03T14:26:53.6635963Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=74.553µs grafana | logger=migrator t=2024-07-03T14:26:53.666279171Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator 
t=2024-07-03T14:26:53.666355602Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=76.461µs grafana | logger=migrator t=2024-07-03T14:26:53.670436047Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2024-07-03T14:26:53.671915861Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.480994ms grafana | logger=migrator t=2024-07-03T14:26:53.675262779Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2024-07-03T14:26:53.67530096Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=39.861µs grafana | logger=migrator t=2024-07-03T14:26:53.678590226Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2024-07-03T14:26:53.681707138Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.116032ms grafana | logger=migrator t=2024-07-03T14:26:53.711560159Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2024-07-03T14:26:53.711798134Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=236.155µs grafana | logger=migrator t=2024-07-03T14:26:53.71509096Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2024-07-03T14:26:53.719960043Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.870063ms grafana | logger=migrator t=2024-07-03T14:26:53.72545724Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2024-07-03T14:26:53.728549771Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.091831ms grafana | logger=migrator t=2024-07-03T14:26:53.74449789Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-07-03T14:26:53.744570501Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=73.031µs grafana | logger=migrator t=2024-07-03T14:26:53.77996358Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-07-03T14:26:53.780752158Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=788.438µs grafana | logger=migrator t=2024-07-03T14:26:53.783908811Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-07-03T14:26:53.78470346Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=793.399µs grafana | logger=migrator t=2024-07-03T14:26:53.788012296Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2024-07-03T14:26:53.78904793Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.036624ms grafana | logger=migrator t=2024-07-03T14:26:53.793137995Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2024-07-03T14:26:53.793908023Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=769.248µs grafana | logger=migrator t=2024-07-03T14:26:53.797124847Z level=info msg="Executing migration" id="add index alert state" grafana | 
logger=migrator t=2024-07-03T14:26:53.797904415Z level=info msg="Migration successfully executed" id="add index alert state" duration=779.358µs grafana | logger=migrator t=2024-07-03T14:26:53.801398286Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2024-07-03T14:26:53.802173484Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=774.828µs grafana | logger=migrator t=2024-07-03T14:26:53.806066473Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2024-07-03T14:26:53.806687109Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=619.975µs grafana | logger=migrator t=2024-07-03T14:26:53.809938074Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2024-07-03T14:26:53.811305675Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.367162ms grafana | logger=migrator t=2024-07-03T14:26:53.814705573Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-07-03T14:26:53.816024824Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.318831ms grafana | logger=migrator t=2024-07-03T14:26:53.820049187Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-07-03T14:26:53.830912708Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.857341ms grafana | logger=migrator t=2024-07-03T14:26:53.840667714Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2024-07-03T14:26:53.841271948Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=605.034µs grafana | logger=migrator t=2024-07-03T14:26:53.844785029Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2024-07-03T14:26:53.846045759Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.26098ms grafana | logger=migrator t=2024-07-03T14:26:53.851394372Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:53.851701089Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=306.827µs grafana | logger=migrator t=2024-07-03T14:26:53.854606557Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2024-07-03T14:26:53.855141419Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=534.372µs grafana | logger=migrator t=2024-07-03T14:26:53.858176889Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-07-03T14:26:53.859307615Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.130996ms grafana | logger=migrator t=2024-07-03T14:26:53.863760248Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-07-03T14:26:53.86901237Z level=info 
msg="Migration successfully executed" id="Add column is_default" duration=5.252952ms grafana | logger=migrator t=2024-07-03T14:26:53.872534741Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-07-03T14:26:53.876261727Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.724546ms grafana | logger=migrator t=2024-07-03T14:26:53.879437301Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-07-03T14:26:53.883121916Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.684775ms grafana | logger=migrator t=2024-07-03T14:26:53.887060857Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-07-03T14:26:53.890828244Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.766338ms grafana | logger=migrator t=2024-07-03T14:26:53.894298194Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-07-03T14:26:53.8958674Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.570126ms grafana | logger=migrator t=2024-07-03T14:26:53.900318743Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-07-03T14:26:53.900357124Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=42.631µs grafana | logger=migrator t=2024-07-03T14:26:53.906145858Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2024-07-03T14:26:53.906206029Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=69.951µs grafana | logger=migrator t=2024-07-03T14:26:53.910908959Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2024-07-03T14:26:53.914396059Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=3.484301ms grafana | logger=migrator t=2024-07-03T14:26:53.931550275Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-07-03T14:26:53.9330222Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.475565ms grafana | logger=migrator t=2024-07-03T14:26:53.936565472Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2024-07-03T14:26:53.937699397Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.133365ms grafana | logger=migrator t=2024-07-03T14:26:53.941123487Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-07-03T14:26:53.942051399Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=927.142µs grafana | logger=migrator t=2024-07-03T14:26:53.94646668Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-07-03T14:26:53.947513265Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.046505ms grafana | logger=migrator 
t=2024-07-03T14:26:53.956460392Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2024-07-03T14:26:53.965179533Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=8.715621ms grafana | logger=migrator t=2024-07-03T14:26:53.970173179Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2024-07-03T14:26:53.973865944Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.692235ms grafana | logger=migrator t=2024-07-03T14:26:53.980076507Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2024-07-03T14:26:53.980502218Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=426.551µs grafana | logger=migrator t=2024-07-03T14:26:53.984023949Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2024-07-03T14:26:53.985070593Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.046594ms grafana | logger=migrator t=2024-07-03T14:26:53.990466908Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2024-07-03T14:26:53.991432531Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=970.213µs grafana | logger=migrator t=2024-07-03T14:26:53.994321938Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2024-07-03T14:26:53.998139306Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.816858ms grafana | logger=migrator t=2024-07-03T14:26:54.001265858Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2024-07-03T14:26:54.001334229Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=69.651µs grafana | logger=migrator t=2024-07-03T14:26:54.006113834Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2024-07-03T14:26:54.00695653Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=844.156µs grafana | logger=migrator t=2024-07-03T14:26:54.010078704Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2024-07-03T14:26:54.010884241Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=805.396µs grafana | logger=migrator t=2024-07-03T14:26:54.015087529Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2024-07-03T14:26:54.015174361Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=88.212µs grafana | logger=migrator t=2024-07-03T14:26:54.019603235Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2024-07-03T14:26:54.02180832Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=2.237376ms grafana | logger=migrator t=2024-07-03T14:26:54.029030732Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator 
t=2024-07-03T14:26:54.029706296Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=675.354µs grafana | logger=migrator t=2024-07-03T14:26:54.032962044Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2024-07-03T14:26:54.033609989Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=648.385µs grafana | logger=migrator t=2024-07-03T14:26:54.03937406Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2024-07-03T14:26:54.040919831Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.546021ms grafana | logger=migrator t=2024-07-03T14:26:54.045714432Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2024-07-03T14:26:54.047440089Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.722307ms grafana | logger=migrator t=2024-07-03T14:26:54.05277913Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2024-07-03T14:26:54.055110449Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.335819ms grafana | logger=migrator t=2024-07-03T14:26:54.063020695Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2024-07-03T14:26:54.063057756Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=36.931µs grafana | logger=migrator t=2024-07-03T14:26:54.068858958Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.072175747Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.321439ms grafana | logger=migrator t=2024-07-03T14:26:54.075313633Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2024-07-03T14:26:54.076491238Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.180775ms grafana | logger=migrator t=2024-07-03T14:26:54.084763381Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.0894677Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.700498ms grafana | logger=migrator t=2024-07-03T14:26:54.095189499Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2024-07-03T14:26:54.096015236Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=826.227µs grafana | logger=migrator t=2024-07-03T14:26:54.103582024Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2024-07-03T14:26:54.10484985Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.266896ms grafana | logger=migrator t=2024-07-03T14:26:54.108244291Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2024-07-03T14:26:54.109280693Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.036302ms grafana | logger=migrator t=2024-07-03T14:26:54.116662447Z level=info msg="Executing migration" 
id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2024-07-03T14:26:54.127269149Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=10.606622ms grafana | logger=migrator t=2024-07-03T14:26:54.13021901Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2024-07-03T14:26:54.130749831Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=530.781µs grafana | logger=migrator t=2024-07-03T14:26:54.134777906Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2024-07-03T14:26:54.135777366Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=999.15µs grafana | logger=migrator t=2024-07-03T14:26:54.138898181Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2024-07-03T14:26:54.139208597Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=310.986µs grafana | logger=migrator t=2024-07-03T14:26:54.142411274Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2024-07-03T14:26:54.142933656Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=521.942µs grafana | logger=migrator t=2024-07-03T14:26:54.147120793Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2024-07-03T14:26:54.147318847Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=197.894µs grafana | logger=migrator t=2024-07-03T14:26:54.152411584Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.15751197Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.093127ms grafana | logger=migrator t=2024-07-03T14:26:54.16421886Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.168640482Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.422702ms grafana | logger=migrator t=2024-07-03T14:26:54.174095547Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.175075667Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=979.43µs grafana | logger=migrator t=2024-07-03T14:26:54.182459251Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.183467852Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.012261ms grafana | logger=migrator t=2024-07-03T14:26:54.186659588Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2024-07-03T14:26:54.186895023Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=235.405µs grafana | logger=migrator 
t=2024-07-03T14:26:54.192303766Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2024-07-03T14:26:54.197092976Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.79454ms grafana | logger=migrator t=2024-07-03T14:26:54.202493239Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2024-07-03T14:26:54.203658043Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.168204ms grafana | logger=migrator t=2024-07-03T14:26:54.206599795Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2024-07-03T14:26:54.206760238Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=157.463µs grafana | logger=migrator t=2024-07-03T14:26:54.210809223Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2024-07-03T14:26:54.21115982Z level=info msg="Migration successfully executed" id="Move region to single row" duration=351.057µs grafana | logger=migrator t=2024-07-03T14:26:54.221321632Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.222442605Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.130013ms grafana | logger=migrator t=2024-07-03T14:26:54.227565383Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.228502202Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=935.159µs grafana | logger=migrator t=2024-07-03T14:26:54.231841322Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.233311063Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.464491ms grafana | logger=migrator t=2024-07-03T14:26:54.240190616Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.241081755Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=890.279µs grafana | logger=migrator t=2024-07-03T14:26:54.304304644Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.305729905Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.425321ms grafana | logger=migrator t=2024-07-03T14:26:54.310428562Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2024-07-03T14:26:54.311773791Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.349799ms grafana | logger=migrator t=2024-07-03T14:26:54.320432892Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2024-07-03T14:26:54.320533614Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=101.383µs grafana | logger=migrator 
t=2024-07-03T14:26:54.325940976Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2024-07-03T14:26:54.32851057Z level=info msg="Migration successfully executed" id="create test_data table" duration=2.569564ms grafana | logger=migrator t=2024-07-03T14:26:54.334677009Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2024-07-03T14:26:54.335510896Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=833.557µs grafana | logger=migrator t=2024-07-03T14:26:54.346321572Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2024-07-03T14:26:54.348218402Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.89683ms grafana | logger=migrator t=2024-07-03T14:26:54.356655028Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2024-07-03T14:26:54.358084877Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.429409ms grafana | logger=migrator t=2024-07-03T14:26:54.361600162Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2024-07-03T14:26:54.361889887Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=289.996µs grafana | logger=migrator t=2024-07-03T14:26:54.367357982Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2024-07-03T14:26:54.36776118Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=403.248µs grafana | logger=migrator t=2024-07-03T14:26:54.3754497Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2024-07-03T14:26:54.375587233Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=138.203µs grafana | logger=migrator t=2024-07-03T14:26:54.381434385Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2024-07-03T14:26:54.38309262Z level=info msg="Migration successfully executed" id="create team table" duration=1.656924ms grafana | logger=migrator t=2024-07-03T14:26:54.387695326Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2024-07-03T14:26:54.389439313Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.745067ms grafana | logger=migrator t=2024-07-03T14:26:54.397874079Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2024-07-03T14:26:54.399469641Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.595222ms grafana | logger=migrator t=2024-07-03T14:26:54.40318477Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2024-07-03T14:26:54.407814606Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.628555ms grafana | logger=migrator t=2024-07-03T14:26:54.410702406Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator 
t=2024-07-03T14:26:54.410954691Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=252.045µs grafana | logger=migrator t=2024-07-03T14:26:54.415149509Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2024-07-03T14:26:54.41617051Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.021081ms grafana | logger=migrator t=2024-07-03T14:26:54.422044983Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2024-07-03T14:26:54.422894731Z level=info msg="Migration successfully executed" id="create team member table" duration=846.248µs grafana | logger=migrator t=2024-07-03T14:26:54.430884577Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2024-07-03T14:26:54.432533252Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.647705ms grafana | logger=migrator t=2024-07-03T14:26:54.438948526Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2024-07-03T14:26:54.440118211Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.168675ms grafana | logger=migrator t=2024-07-03T14:26:54.443129294Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2024-07-03T14:26:54.444158395Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.029331ms grafana | logger=migrator t=2024-07-03T14:26:54.44869061Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2024-07-03T14:26:54.453935889Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.240219ms grafana | logger=migrator t=2024-07-03T14:26:54.459109206Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2024-07-03T14:26:54.464273525Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.164189ms grafana | logger=migrator t=2024-07-03T14:26:54.470121577Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2024-07-03T14:26:54.474787624Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.661798ms grafana | logger=migrator t=2024-07-03T14:26:54.480418862Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2024-07-03T14:26:54.481394442Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=967.989µs grafana | logger=migrator t=2024-07-03T14:26:54.486627251Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2024-07-03T14:26:54.487751415Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.124054ms grafana | logger=migrator t=2024-07-03T14:26:54.493061156Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2024-07-03T14:26:54.494125608Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.064672ms grafana | logger=migrator 
t=2024-07-03T14:26:54.502567034Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2024-07-03T14:26:54.504337192Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.771207ms grafana | logger=migrator t=2024-07-03T14:26:54.50957229Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2024-07-03T14:26:54.510612862Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.040812ms grafana | logger=migrator t=2024-07-03T14:26:54.513700887Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2024-07-03T14:26:54.514756818Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.055582ms grafana | logger=migrator t=2024-07-03T14:26:54.518324114Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2024-07-03T14:26:54.519529178Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.205004ms grafana | logger=migrator t=2024-07-03T14:26:54.525398481Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2024-07-03T14:26:54.527097887Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.700026ms grafana | logger=migrator t=2024-07-03T14:26:54.531164771Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2024-07-03T14:26:54.532217494Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.051963ms grafana | logger=migrator t=2024-07-03T14:26:54.539204139Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2024-07-03T14:26:54.539509425Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=305.636µs grafana | logger=migrator t=2024-07-03T14:26:54.545816557Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2024-07-03T14:26:54.546684466Z level=info msg="Migration successfully executed" id="create tag table" duration=867.689µs grafana | logger=migrator t=2024-07-03T14:26:54.554668282Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2024-07-03T14:26:54.555809246Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.140784ms grafana | logger=migrator t=2024-07-03T14:26:54.561840112Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2024-07-03T14:26:54.564572319Z level=info msg="Migration successfully executed" id="create login attempt table" duration=2.735227ms grafana | logger=migrator t=2024-07-03T14:26:54.567854617Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2024-07-03T14:26:54.569187305Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.333118ms grafana | logger=migrator t=2024-07-03T14:26:54.572180608Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2024-07-03T14:26:54.573468314Z level=info msg="Migration successfully 
executed" id="drop index IDX_login_attempt_username - v1" duration=1.287156ms grafana | logger=migrator t=2024-07-03T14:26:54.579445149Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2024-07-03T14:26:54.593912992Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.468093ms grafana | logger=migrator t=2024-07-03T14:26:54.596767972Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2024-07-03T14:26:54.597515007Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=746.945µs grafana | logger=migrator t=2024-07-03T14:26:54.604547553Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2024-07-03T14:26:54.605680527Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.131844ms grafana | logger=migrator t=2024-07-03T14:26:54.614700535Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:54.615484841Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=792.556µs grafana | logger=migrator t=2024-07-03T14:26:54.619338682Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2024-07-03T14:26:54.620118148Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=779.166µs grafana | logger=migrator t=2024-07-03T14:26:54.623986299Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2024-07-03T14:26:54.624992001Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.005672ms grafana | logger=migrator t=2024-07-03T14:26:54.632144979Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2024-07-03T14:26:54.633149821Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.003852ms grafana | logger=migrator t=2024-07-03T14:26:54.6359944Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2024-07-03T14:26:54.636136263Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=142.563µs grafana | logger=migrator t=2024-07-03T14:26:54.643349173Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2024-07-03T14:26:54.650854531Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.508797ms grafana | logger=migrator t=2024-07-03T14:26:54.656163901Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2024-07-03T14:26:54.661401441Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.23676ms grafana | logger=migrator t=2024-07-03T14:26:54.667380615Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2024-07-03T14:26:54.672437502Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.056306ms grafana | logger=migrator t=2024-07-03T14:26:54.684402221Z level=info msg="Executing 
migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2024-07-03T14:26:54.692456349Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.050798ms grafana | logger=migrator t=2024-07-03T14:26:54.696780699Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2024-07-03T14:26:54.697659028Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=875.599µs grafana | logger=migrator t=2024-07-03T14:26:54.702405896Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2024-07-03T14:26:54.712781603Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=10.375677ms grafana | logger=migrator t=2024-07-03T14:26:54.718114165Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2024-07-03T14:26:54.718776218Z level=info msg="Migration successfully executed" id="create server_lock table" duration=662.524µs grafana | logger=migrator t=2024-07-03T14:26:54.72648577Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2024-07-03T14:26:54.728738977Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=2.255687ms grafana | logger=migrator t=2024-07-03T14:26:54.732050876Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2024-07-03T14:26:54.733873604Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.823178ms grafana | logger=migrator t=2024-07-03T14:26:54.73943922Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2024-07-03T14:26:54.741067414Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.623653ms grafana | logger=migrator t=2024-07-03T14:26:54.743721989Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2024-07-03T14:26:54.745427025Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.705126ms grafana | logger=migrator t=2024-07-03T14:26:54.747976708Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2024-07-03T14:26:54.748972539Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=996.471µs grafana | logger=migrator t=2024-07-03T14:26:54.754045815Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2024-07-03T14:26:54.761051051Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.003416ms grafana | logger=migrator t=2024-07-03T14:26:54.763842739Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2024-07-03T14:26:54.765147697Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.304368ms grafana | logger=migrator t=2024-07-03T14:26:54.768020627Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2024-07-03T14:26:54.7691684Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.147523ms 
grafana | logger=migrator t=2024-07-03T14:26:54.773580933Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2024-07-03T14:26:54.774482572Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=900.798µs grafana | logger=migrator t=2024-07-03T14:26:54.777201918Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2024-07-03T14:26:54.77823492Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.032802ms grafana | logger=migrator t=2024-07-03T14:26:54.781033638Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2024-07-03T14:26:54.781999169Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=964.891µs grafana | logger=migrator t=2024-07-03T14:26:54.785782777Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2024-07-03T14:26:54.785851639Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=69.512µs grafana | logger=migrator t=2024-07-03T14:26:54.787909452Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2024-07-03T14:26:54.788004914Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=92.522µs grafana | logger=migrator t=2024-07-03T14:26:54.79257261Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2024-07-03T14:26:54.793475648Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=902.528µs grafana | logger=migrator t=2024-07-03T14:26:54.802604619Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-07-03T14:26:54.807393969Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=4.7878ms grafana | logger=migrator t=2024-07-03T14:26:54.810963273Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-07-03T14:26:54.812267781Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.304168ms grafana | logger=migrator t=2024-07-03T14:26:54.815012678Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2024-07-03T14:26:54.815071909Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=59.911µs grafana | logger=migrator t=2024-07-03T14:26:54.818498081Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-07-03T14:26:54.819219005Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=720.154µs grafana | logger=migrator t=2024-07-03T14:26:54.823344132Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-07-03T14:26:54.823990105Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid 
columns" duration=645.883µs grafana | logger=migrator t=2024-07-03T14:26:54.826621431Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-07-03T14:26:54.827308645Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=686.935µs grafana | logger=migrator t=2024-07-03T14:26:54.830680225Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-07-03T14:26:54.83141219Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=729.575µs grafana | logger=migrator t=2024-07-03T14:26:54.836072978Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2024-07-03T14:26:54.84193127Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.857713ms grafana | logger=migrator t=2024-07-03T14:26:54.846517286Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2024-07-03T14:26:54.847408644Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=888.618µs grafana | logger=migrator t=2024-07-03T14:26:54.851346556Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2024-07-03T14:26:54.851428649Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=82.393µs grafana | logger=migrator t=2024-07-03T14:26:54.85437893Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2024-07-03T14:26:54.855299009Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=919.109µs grafana | logger=migrator t=2024-07-03T14:26:54.85819956Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2024-07-03T14:26:54.859138319Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=938.679µs grafana | logger=migrator t=2024-07-03T14:26:54.862816876Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2024-07-03T14:26:54.863753656Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=936.699µs grafana | logger=migrator t=2024-07-03T14:26:54.866677876Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-07-03T14:26:54.866792569Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=115.963µs grafana | logger=migrator t=2024-07-03T14:26:54.870258591Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2024-07-03T14:26:54.871824394Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.564693ms grafana | logger=migrator t=2024-07-03T14:26:54.876054162Z level=info msg="Executing 
migration" id="create alert_instance table" grafana | logger=migrator t=2024-07-03T14:26:54.877102864Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.048392ms grafana | logger=migrator t=2024-07-03T14:26:54.883727253Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2024-07-03T14:26:54.885384148Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.656415ms grafana | logger=migrator t=2024-07-03T14:26:54.891187509Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2024-07-03T14:26:54.892512666Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.300737ms grafana | logger=migrator t=2024-07-03T14:26:54.89561404Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2024-07-03T14:26:54.90323141Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.6174ms grafana | logger=migrator t=2024-07-03T14:26:54.906881146Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2024-07-03T14:26:54.907762495Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=880.489µs grafana | logger=migrator t=2024-07-03T14:26:54.912512493Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-07-03T14:26:54.913399282Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=886.849µs grafana | logger=migrator t=2024-07-03T14:26:54.917952137Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2024-07-03T14:26:54.949965776Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=32.006668ms grafana | logger=migrator t=2024-07-03T14:26:55.014490872Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2024-07-03T14:26:55.046893709Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=32.418307ms grafana | logger=migrator t=2024-07-03T14:26:55.052997721Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2024-07-03T14:26:55.054307252Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.310201ms grafana | logger=migrator t=2024-07-03T14:26:55.061435738Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-07-03T14:26:55.062954334Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.518836ms grafana | logger=migrator t=2024-07-03T14:26:55.066138538Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | 
logger=migrator t=2024-07-03T14:26:55.072252571Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.114693ms grafana | logger=migrator t=2024-07-03T14:26:55.076970872Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2024-07-03T14:26:55.082937921Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.966949ms grafana | logger=migrator t=2024-07-03T14:26:55.085835039Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2024-07-03T14:26:55.086541235Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=706.426µs grafana | logger=migrator t=2024-07-03T14:26:55.089360241Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2024-07-03T14:26:55.090052558Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=691.197µs grafana | logger=migrator t=2024-07-03T14:26:55.097497111Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2024-07-03T14:26:55.098558326Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.065065ms grafana | logger=migrator t=2024-07-03T14:26:55.104559126Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2024-07-03T14:26:55.105644631Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.085095ms grafana | logger=migrator t=2024-07-03T14:26:55.109569344Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2024-07-03T14:26:55.109654336Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=79.611µs grafana | logger=migrator t=2024-07-03T14:26:55.112176465Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2024-07-03T14:26:55.118744318Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.571944ms grafana | logger=migrator t=2024-07-03T14:26:55.155487286Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2024-07-03T14:26:55.16420395Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=8.719024ms grafana | logger=migrator t=2024-07-03T14:26:55.168631694Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2024-07-03T14:26:55.180350557Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=11.715364ms grafana | logger=migrator t=2024-07-03T14:26:55.184644387Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2024-07-03T14:26:55.185640681Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=997.274µs grafana | logger=migrator t=2024-07-03T14:26:55.189193184Z level=info msg="Executing migration" id="add index in alert_rule on 
org_id, namespase_uid and title columns" grafana | logger=migrator t=2024-07-03T14:26:55.190208877Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.014813ms grafana | logger=migrator t=2024-07-03T14:26:55.199979616Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2024-07-03T14:26:55.205661468Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.678942ms grafana | logger=migrator t=2024-07-03T14:26:55.208910634Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2024-07-03T14:26:55.216686706Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.770832ms grafana | logger=migrator t=2024-07-03T14:26:55.223326341Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2024-07-03T14:26:55.225116043Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.792672ms grafana | logger=migrator t=2024-07-03T14:26:55.229923376Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2024-07-03T14:26:55.236295864Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.373118ms grafana | logger=migrator t=2024-07-03T14:26:55.239804576Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2024-07-03T14:26:55.244096387Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.290831ms grafana | logger=migrator t=2024-07-03T14:26:55.248146031Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2024-07-03T14:26:55.248229973Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=87.122µs grafana | logger=migrator t=2024-07-03T14:26:55.251275174Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2024-07-03T14:26:55.252395191Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.125387ms grafana | logger=migrator t=2024-07-03T14:26:55.25791378Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2024-07-03T14:26:55.259209389Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.29988ms grafana | logger=migrator t=2024-07-03T14:26:55.26647982Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2024-07-03T14:26:55.267631126Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.152716ms grafana | logger=migrator t=2024-07-03T14:26:55.271962188Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-07-03T14:26:55.272034589Z level=info msg="Migration 
successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=73.131µs grafana | logger=migrator t=2024-07-03T14:26:55.274849625Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2024-07-03T14:26:55.281580982Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.732127ms grafana | logger=migrator t=2024-07-03T14:26:55.284632863Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2024-07-03T14:26:55.291779651Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.146218ms grafana | logger=migrator t=2024-07-03T14:26:55.29516814Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2024-07-03T14:26:55.302084851Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.916311ms grafana | logger=migrator t=2024-07-03T14:26:55.309210388Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2024-07-03T14:26:55.315445843Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.234885ms grafana | logger=migrator t=2024-07-03T14:26:55.319189921Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2024-07-03T14:26:55.326101792Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.911231ms grafana | logger=migrator t=2024-07-03T14:26:55.329831429Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2024-07-03T14:26:55.329987503Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=130.873µs grafana | logger=migrator t=2024-07-03T14:26:55.333181737Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2024-07-03T14:26:55.33414687Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=966.303µs grafana | logger=migrator t=2024-07-03T14:26:55.338557163Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2024-07-03T14:26:55.344849631Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.291038ms grafana | logger=migrator t=2024-07-03T14:26:55.34998201Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2024-07-03T14:26:55.350109634Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=127.384µs grafana | logger=migrator t=2024-07-03T14:26:55.352801157Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2024-07-03T14:26:55.359449912Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.648285ms grafana | logger=migrator t=2024-07-03T14:26:55.364854249Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id 
column" grafana | logger=migrator t=2024-07-03T14:26:55.365924414Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.069774ms grafana | logger=migrator t=2024-07-03T14:26:55.369233031Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2024-07-03T14:26:55.375762094Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.528493ms grafana | logger=migrator t=2024-07-03T14:26:55.380701659Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2024-07-03T14:26:55.381608551Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=905.692µs grafana | logger=migrator t=2024-07-03T14:26:55.38845277Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2024-07-03T14:26:55.389650849Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.197469ms grafana | logger=migrator t=2024-07-03T14:26:55.397006611Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2024-07-03T14:26:55.403859581Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.858251ms grafana | logger=migrator t=2024-07-03T14:26:55.406686707Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2024-07-03T14:26:55.407277931Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=591.064µs grafana | logger=migrator t=2024-07-03T14:26:55.410807163Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2024-07-03T14:26:55.411824938Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.017535ms grafana | logger=migrator t=2024-07-03T14:26:55.415732039Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2024-07-03T14:26:55.416551638Z level=info msg="Migration successfully executed" id="create alert_image table" duration=819.519µs grafana | logger=migrator t=2024-07-03T14:26:55.420216294Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2024-07-03T14:26:55.421447102Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.229938ms grafana | logger=migrator t=2024-07-03T14:26:55.426592063Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2024-07-03T14:26:55.426664175Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=73.272µs grafana | logger=migrator t=2024-07-03T14:26:55.434363345Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2024-07-03T14:26:55.435339637Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=975.582µs grafana | logger=migrator t=2024-07-03T14:26:55.438747607Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 
grafana | logger=migrator t=2024-07-03T14:26:55.439684439Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=936.362µs grafana | logger=migrator t=2024-07-03T14:26:55.446391486Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2024-07-03T14:26:55.447125093Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2024-07-03T14:26:55.452250253Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2024-07-03T14:26:55.453021101Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=770.608µs grafana | logger=migrator t=2024-07-03T14:26:55.456859991Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2024-07-03T14:26:55.457893165Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.033404ms grafana | logger=migrator t=2024-07-03T14:26:55.461417997Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2024-07-03T14:26:55.470611433Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.193936ms grafana | logger=migrator t=2024-07-03T14:26:55.475509647Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2024-07-03T14:26:55.476304066Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=795.159µs grafana | logger=migrator t=2024-07-03T14:26:55.479252245Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2024-07-03T14:26:55.480070444Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=817.889µs grafana | logger=migrator t=2024-07-03T14:26:55.48289333Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2024-07-03T14:26:55.483718539Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=824.899µs grafana | logger=migrator t=2024-07-03T14:26:55.48802924Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2024-07-03T14:26:55.489106695Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.076675ms grafana | logger=migrator t=2024-07-03T14:26:55.491950052Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2024-07-03T14:26:55.493011557Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.061065ms grafana | logger=migrator t=2024-07-03T14:26:55.496597791Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2024-07-03T14:26:55.496627351Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=30.92µs grafana | 
logger=migrator t=2024-07-03T14:26:55.500283337Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2024-07-03T14:26:55.500387539Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=75.062µs grafana | logger=migrator t=2024-07-03T14:26:55.508468898Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2024-07-03T14:26:55.518512823Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.045585ms grafana | logger=migrator t=2024-07-03T14:26:55.522143528Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2024-07-03T14:26:55.522509787Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=365.439µs grafana | logger=migrator t=2024-07-03T14:26:55.526313205Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2024-07-03T14:26:55.527386161Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.072146ms grafana | logger=migrator t=2024-07-03T14:26:55.531235101Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2024-07-03T14:26:55.531511037Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=275.986µs grafana | logger=migrator t=2024-07-03T14:26:55.534761073Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2024-07-03T14:26:55.53589044Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.131497ms grafana | logger=migrator t=2024-07-03T14:26:55.539022173Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2024-07-03T14:26:55.540120228Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.097515ms grafana | logger=migrator t=2024-07-03T14:26:55.548917545Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2024-07-03T14:26:55.58591159Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=36.991725ms grafana | logger=migrator t=2024-07-03T14:26:55.589132526Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2024-07-03T14:26:55.594672775Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.538999ms grafana | logger=migrator t=2024-07-03T14:26:55.597998623Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2024-07-03T14:26:55.598146466Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=141.554µs grafana | logger=migrator t=2024-07-03T14:26:55.601864733Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2024-07-03T14:26:55.634413545Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.549121ms grafana | logger=migrator t=2024-07-03T14:26:55.638360497Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator 
t=2024-07-03T14:26:55.668552393Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.192117ms grafana | logger=migrator t=2024-07-03T14:26:55.673156501Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2024-07-03T14:26:55.673940589Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=783.818µs grafana | logger=migrator t=2024-07-03T14:26:55.681943006Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2024-07-03T14:26:55.683885032Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.940276ms grafana | logger=migrator t=2024-07-03T14:26:55.687533847Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2024-07-03T14:26:55.687833824Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=293.517µs grafana | logger=migrator t=2024-07-03T14:26:55.69364677Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2024-07-03T14:26:55.694471919Z level=info msg="Migration successfully executed" id="create permission table" duration=825.409µs grafana | logger=migrator t=2024-07-03T14:26:55.700514901Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2024-07-03T14:26:55.702353234Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.835773ms grafana | logger=migrator t=2024-07-03T14:26:55.706244835Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2024-07-03T14:26:55.708325813Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.080738ms grafana | logger=migrator t=2024-07-03T14:26:55.713868673Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2024-07-03T14:26:55.715038211Z level=info msg="Migration successfully executed" id="create role table" duration=1.160898ms grafana | logger=migrator t=2024-07-03T14:26:55.719769211Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2024-07-03T14:26:55.728284151Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.51414ms grafana | logger=migrator t=2024-07-03T14:26:55.73595591Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2024-07-03T14:26:55.746058656Z level=info msg="Migration successfully executed" id="add column group_name" duration=10.102076ms grafana | logger=migrator t=2024-07-03T14:26:55.750629074Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2024-07-03T14:26:55.751462573Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=832.839µs grafana | logger=migrator t=2024-07-03T14:26:55.755817574Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2024-07-03T14:26:55.757072454Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.25615ms grafana | logger=migrator t=2024-07-03T14:26:55.761772663Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator 
t=2024-07-03T14:26:55.763706099Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.932386ms grafana | logger=migrator t=2024-07-03T14:26:55.772822842Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2024-07-03T14:26:55.773797745Z level=info msg="Migration successfully executed" id="create team role table" duration=974.593µs grafana | logger=migrator t=2024-07-03T14:26:55.781815053Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2024-07-03T14:26:55.783075332Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.259349ms grafana | logger=migrator t=2024-07-03T14:26:55.786744918Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2024-07-03T14:26:55.787894225Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.148107ms grafana | logger=migrator t=2024-07-03T14:26:55.791455678Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2024-07-03T14:26:55.792614066Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.158308ms grafana | logger=migrator t=2024-07-03T14:26:55.796140838Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2024-07-03T14:26:55.797056079Z level=info msg="Migration successfully executed" id="create user role table" duration=914.901µs grafana | logger=migrator t=2024-07-03T14:26:55.802058326Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2024-07-03T14:26:55.803219933Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.160887ms grafana | logger=migrator t=2024-07-03T14:26:55.806560361Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2024-07-03T14:26:55.807841532Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.28506ms grafana | logger=migrator t=2024-07-03T14:26:55.811075247Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2024-07-03T14:26:55.812271326Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.195709ms grafana | logger=migrator t=2024-07-03T14:26:55.817231411Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2024-07-03T14:26:55.818170774Z level=info msg="Migration successfully executed" id="create builtin role table" duration=935.633µs grafana | logger=migrator t=2024-07-03T14:26:55.827022141Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2024-07-03T14:26:55.82827519Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.253599ms grafana | logger=migrator t=2024-07-03T14:26:55.831361402Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2024-07-03T14:26:55.832415097Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.053615ms grafana | logger=migrator t=2024-07-03T14:26:55.837984207Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator 
t=2024-07-03T14:26:55.84708836Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.102163ms grafana | logger=migrator t=2024-07-03T14:26:55.850383997Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2024-07-03T14:26:55.851493143Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.108656ms grafana | logger=migrator t=2024-07-03T14:26:55.854926553Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2024-07-03T14:26:55.857084394Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.155871ms grafana | logger=migrator t=2024-07-03T14:26:55.862416758Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2024-07-03T14:26:55.86419075Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.772612ms grafana | logger=migrator t=2024-07-03T14:26:55.868847829Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2024-07-03T14:26:55.870518597Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.670698ms grafana | logger=migrator t=2024-07-03T14:26:55.874834209Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2024-07-03T14:26:55.875642118Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=808.099µs grafana | logger=migrator t=2024-07-03T14:26:55.87871918Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2024-07-03T14:26:55.879775775Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.055255ms grafana | logger=migrator t=2024-07-03T14:26:55.882606651Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2024-07-03T14:26:55.890911765Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.304504ms grafana | logger=migrator t=2024-07-03T14:26:55.896167498Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2024-07-03T14:26:55.904324398Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.157001ms grafana | logger=migrator t=2024-07-03T14:26:55.910019442Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2024-07-03T14:26:55.917276472Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.25545ms grafana | logger=migrator t=2024-07-03T14:26:55.919817241Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2024-07-03T14:26:55.927924861Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.10621ms grafana | logger=migrator t=2024-07-03T14:26:55.932794995Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2024-07-03T14:26:55.933585543Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=790.658µs grafana | logger=migrator t=2024-07-03T14:26:55.93819025Z level=info msg="Executing migration" id="add 
permission action scope role_id index" grafana | logger=migrator t=2024-07-03T14:26:55.939735647Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.543167ms grafana | logger=migrator t=2024-07-03T14:26:55.94586263Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2024-07-03T14:26:55.9475307Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.66804ms grafana | logger=migrator t=2024-07-03T14:26:55.95521445Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2024-07-03T14:26:55.956782736Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.567436ms grafana | logger=migrator t=2024-07-03T14:26:55.960027952Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2024-07-03T14:26:55.961712901Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.684329ms grafana | logger=migrator t=2024-07-03T14:26:55.964752172Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2024-07-03T14:26:55.964817264Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=65.142µs grafana | logger=migrator t=2024-07-03T14:26:55.969743149Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2024-07-03T14:26:55.969801001Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=58.542µs grafana | logger=migrator t=2024-07-03T14:26:55.973829405Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2024-07-03T14:26:55.974464339Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=634.264µs grafana | logger=migrator t=2024-07-03T14:26:55.977808368Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2024-07-03T14:26:55.978661918Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=858.25µs grafana | logger=migrator t=2024-07-03T14:26:55.981992725Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2024-07-03T14:26:55.982576349Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=583.484µs grafana | logger=migrator t=2024-07-03T14:26:55.989213825Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2024-07-03T14:26:55.989481491Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=266.926µs grafana | logger=migrator t=2024-07-03T14:26:55.999896774Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2024-07-03T14:26:56.000736534Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=839.3µs grafana | logger=migrator t=2024-07-03T14:26:56.003881568Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2024-07-03T14:26:56.00529026Z level=info msg="Migration successfully executed" id="create query_history_star 
table v1" duration=1.409672ms grafana | logger=migrator t=2024-07-03T14:26:56.008099186Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2024-07-03T14:26:56.009272254Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.172477ms grafana | logger=migrator t=2024-07-03T14:26:56.015942979Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2024-07-03T14:26:56.027802855Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.860196ms grafana | logger=migrator t=2024-07-03T14:26:56.035846463Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2024-07-03T14:26:56.035913305Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=67.322µs grafana | logger=migrator t=2024-07-03T14:26:56.042206033Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2024-07-03T14:26:56.043754378Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.547365ms grafana | logger=migrator t=2024-07-03T14:26:56.049178955Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2024-07-03T14:26:56.050929156Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.748441ms grafana | logger=migrator t=2024-07-03T14:26:56.055948523Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2024-07-03T14:26:56.057044239Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.091556ms grafana | logger=migrator t=2024-07-03T14:26:56.061488862Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2024-07-03T14:26:56.072137301Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.648409ms grafana | logger=migrator t=2024-07-03T14:26:56.077269711Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.078051469Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=781.158µs grafana | logger=migrator t=2024-07-03T14:26:56.080529217Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.081614742Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.084946ms grafana | logger=migrator t=2024-07-03T14:26:56.088620126Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2024-07-03T14:26:56.11276512Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.150925ms grafana | logger=migrator t=2024-07-03T14:26:56.116611709Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2024-07-03T14:26:56.11752544Z level=info msg="Migration successfully executed" id="create correlation v2" duration=913.641µs grafana | logger=migrator t=2024-07-03T14:26:56.122038566Z level=info msg="Executing migration" id="create 
index IDX_correlation_uid - v2" grafana | logger=migrator t=2024-07-03T14:26:56.123297326Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.25684ms grafana | logger=migrator t=2024-07-03T14:26:56.128702091Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2024-07-03T14:26:56.129809558Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.107357ms grafana | logger=migrator t=2024-07-03T14:26:56.132536502Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2024-07-03T14:26:56.133647847Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.110605ms grafana | logger=migrator t=2024-07-03T14:26:56.137321604Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2024-07-03T14:26:56.137558139Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=236.576µs grafana | logger=migrator t=2024-07-03T14:26:56.142287488Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2024-07-03T14:26:56.143644071Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.352322ms grafana | logger=migrator t=2024-07-03T14:26:56.148469684Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2024-07-03T14:26:56.158857916Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.388312ms grafana | logger=migrator t=2024-07-03T14:26:56.162095761Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2024-07-03T14:26:56.162893121Z level=info msg="Migration successfully executed" id="create entity_events table" duration=794.229µs grafana | logger=migrator t=2024-07-03T14:26:56.167946848Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2024-07-03T14:26:56.168946462Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=998.074µs grafana | logger=migrator t=2024-07-03T14:26:56.172532705Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.173096548Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.176098468Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.176665372Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.183364658Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2024-07-03T14:26:56.184059874Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=694.246µs grafana | logger=migrator t=2024-07-03T14:26:56.189617264Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2024-07-03T14:26:56.191054508Z level=info msg="Migration 
successfully executed" id="recreate dashboard public config v1" duration=1.433734ms grafana | logger=migrator t=2024-07-03T14:26:56.194475518Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.195956672Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.481154ms grafana | logger=migrator t=2024-07-03T14:26:56.202412843Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-07-03T14:26:56.203331244Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=918.761µs grafana | logger=migrator t=2024-07-03T14:26:56.206003286Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-07-03T14:26:56.206833665Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=829.729µs grafana | logger=migrator t=2024-07-03T14:26:56.210472811Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-07-03T14:26:56.211753111Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.27808ms grafana | logger=migrator t=2024-07-03T14:26:56.218626922Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2024-07-03T14:26:56.219655515Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.033453ms grafana | logger=migrator t=2024-07-03T14:26:56.227048179Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2024-07-03T14:26:56.228249096Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.200607ms grafana | logger=migrator t=2024-07-03T14:26:56.231185474Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-07-03T14:26:56.232255409Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.069785ms grafana | logger=migrator t=2024-07-03T14:26:56.238327152Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-07-03T14:26:56.239437417Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.109715ms grafana | logger=migrator t=2024-07-03T14:26:56.243931042Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2024-07-03T14:26:56.246195505Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.265412ms grafana | logger=migrator t=2024-07-03T14:26:56.249851651Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2024-07-03T14:26:56.270282767Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.427416ms grafana | logger=migrator t=2024-07-03T14:26:56.275665973Z level=info 
msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2024-07-03T14:26:56.284469559Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.807646ms grafana | logger=migrator t=2024-07-03T14:26:56.290380616Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2024-07-03T14:26:56.297280598Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.899342ms grafana | logger=migrator t=2024-07-03T14:26:56.300200736Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2024-07-03T14:26:56.300448512Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=248.126µs grafana | logger=migrator t=2024-07-03T14:26:56.304077946Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2024-07-03T14:26:56.312510993Z level=info msg="Migration successfully executed" id="add share column" duration=8.432367ms grafana | logger=migrator t=2024-07-03T14:26:56.316182519Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2024-07-03T14:26:56.316384883Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=202.144µs grafana | logger=migrator t=2024-07-03T14:26:56.321657397Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2024-07-03T14:26:56.322660781Z level=info msg="Migration successfully executed" id="create file table" duration=1.004854ms grafana | logger=migrator t=2024-07-03T14:26:56.329177782Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2024-07-03T14:26:56.331364314Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.185301ms grafana | logger=migrator t=2024-07-03T14:26:56.334613449Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2024-07-03T14:26:56.336595626Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.980997ms grafana | logger=migrator t=2024-07-03T14:26:56.339655367Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2024-07-03T14:26:56.340404725Z level=info msg="Migration successfully executed" id="create file_meta table" duration=749.168µs grafana | logger=migrator t=2024-07-03T14:26:56.34489464Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2024-07-03T14:26:56.345829571Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=934.351µs grafana | logger=migrator t=2024-07-03T14:26:56.351649167Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2024-07-03T14:26:56.351830491Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=180.384µs grafana | logger=migrator t=2024-07-03T14:26:56.355254111Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2024-07-03T14:26:56.355424785Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" 
duration=169.284µs grafana | logger=migrator t=2024-07-03T14:26:56.361458037Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2024-07-03T14:26:56.362383618Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=926.892µs grafana | logger=migrator t=2024-07-03T14:26:56.365327757Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2024-07-03T14:26:56.365627693Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=299.377µs grafana | logger=migrator t=2024-07-03T14:26:56.369141675Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2024-07-03T14:26:56.373119479Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=3.975924ms grafana | logger=migrator t=2024-07-03T14:26:56.377436809Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2024-07-03T14:26:56.38905304Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=11.615451ms grafana | logger=migrator t=2024-07-03T14:26:56.393811021Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2024-07-03T14:26:56.394014926Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=203.405µs grafana | logger=migrator t=2024-07-03T14:26:56.399903594Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2024-07-03T14:26:56.400930868Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.027195ms grafana | logger=migrator t=2024-07-03T14:26:56.404534962Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2024-07-03T14:26:56.404935131Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=392.689µs grafana | logger=migrator t=2024-07-03T14:26:56.411834473Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2024-07-03T14:26:56.412132769Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=299.816µs grafana | logger=migrator t=2024-07-03T14:26:56.416941451Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2024-07-03T14:26:56.417908704Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=967.003µs grafana | logger=migrator t=2024-07-03T14:26:56.422059161Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2024-07-03T14:26:56.432103116Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.043275ms grafana | logger=migrator t=2024-07-03T14:26:56.435382842Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2024-07-03T14:26:56.443180934Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.797592ms grafana | logger=migrator t=2024-07-03T14:26:56.450461284Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before 
nullable update" grafana | logger=migrator t=2024-07-03T14:26:56.451622791Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.161377ms grafana | logger=migrator t=2024-07-03T14:26:56.458910101Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2024-07-03T14:26:56.535851488Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.941477ms grafana | logger=migrator t=2024-07-03T14:26:56.539143735Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2024-07-03T14:26:56.540071226Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=926.741µs grafana | logger=migrator t=2024-07-03T14:26:56.544179903Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2024-07-03T14:26:56.545650926Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.469864ms grafana | logger=migrator t=2024-07-03T14:26:56.550045509Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2024-07-03T14:26:56.576606029Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.55122ms grafana | logger=migrator t=2024-07-03T14:26:56.582124268Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2024-07-03T14:26:56.588509898Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.38662ms grafana | logger=migrator t=2024-07-03T14:26:56.591933768Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2024-07-03T14:26:56.592225284Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=290.986µs grafana | logger=migrator t=2024-07-03T14:26:56.596949055Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2024-07-03T14:26:56.597130249Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=180.854µs grafana | logger=migrator t=2024-07-03T14:26:56.603282592Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2024-07-03T14:26:56.603499557Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=214.675µs grafana | logger=migrator t=2024-07-03T14:26:56.611968375Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2024-07-03T14:26:56.61217417Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=206.015µs grafana | logger=migrator t=2024-07-03T14:26:56.618048407Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2024-07-03T14:26:56.618253942Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=204.285µs grafana | logger=migrator t=2024-07-03T14:26:56.622274046Z level=info msg="Executing migration" id="create folder 
table" grafana | logger=migrator t=2024-07-03T14:26:56.623063074Z level=info msg="Migration successfully executed" id="create folder table" duration=788.928µs grafana | logger=migrator t=2024-07-03T14:26:56.628070991Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2024-07-03T14:26:56.629077804Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.005563ms grafana | logger=migrator t=2024-07-03T14:26:56.633829686Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2024-07-03T14:26:56.634816158Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=985.452µs grafana | logger=migrator t=2024-07-03T14:26:56.638522585Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2024-07-03T14:26:56.638545315Z level=info msg="Migration successfully executed" id="Update folder title length" duration=22.33µs grafana | logger=migrator t=2024-07-03T14:26:56.645295033Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-07-03T14:26:56.646236105Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=940.932µs grafana | logger=migrator t=2024-07-03T14:26:56.651264633Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-07-03T14:26:56.652115532Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=832.659µs grafana | logger=migrator t=2024-07-03T14:26:56.655095612Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2024-07-03T14:26:56.655978143Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=882.101µs grafana | logger=migrator t=2024-07-03T14:26:56.660049407Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2024-07-03T14:26:56.660415546Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=365.999µs grafana | logger=migrator t=2024-07-03T14:26:56.663421876Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2024-07-03T14:26:56.663655541Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=233.555µs grafana | logger=migrator t=2024-07-03T14:26:56.667656485Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2024-07-03T14:26:56.668496695Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=839.79µs grafana | logger=migrator t=2024-07-03T14:26:56.674238439Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2024-07-03T14:26:56.675836147Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.596968ms grafana | logger=migrator t=2024-07-03T14:26:56.679125413Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2024-07-03T14:26:56.68030537Z 
level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.179947ms grafana | logger=migrator t=2024-07-03T14:26:56.686717511Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2024-07-03T14:26:56.68800144Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.283809ms grafana | logger=migrator t=2024-07-03T14:26:56.692902415Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2024-07-03T14:26:56.694703047Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.799802ms grafana | logger=migrator t=2024-07-03T14:26:56.698153128Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2024-07-03T14:26:56.699945239Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.788922ms grafana | logger=migrator t=2024-07-03T14:26:56.704171618Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2024-07-03T14:26:56.70557374Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.402082ms grafana | logger=migrator t=2024-07-03T14:26:56.711259753Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2024-07-03T14:26:56.713401933Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.14142ms grafana | logger=migrator t=2024-07-03T14:26:56.719175498Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2024-07-03T14:26:56.720219973Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.043895ms grafana | logger=migrator t=2024-07-03T14:26:56.72526607Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2024-07-03T14:26:56.726880038Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.610338ms grafana | logger=migrator t=2024-07-03T14:26:56.730977303Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2024-07-03T14:26:56.732839327Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.861884ms grafana | logger=migrator t=2024-07-03T14:26:56.73721175Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2024-07-03T14:26:56.737601229Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=390.459µs grafana | logger=migrator t=2024-07-03T14:26:56.742198405Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2024-07-03T14:26:56.752149988Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.950773ms grafana | logger=migrator t=2024-07-03T14:26:56.757228697Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2024-07-03T14:26:56.757840651Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" 
duration=612.384µs grafana | logger=migrator t=2024-07-03T14:26:56.764096107Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-07-03T14:26:56.764123907Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=28.6µs grafana | logger=migrator t=2024-07-03T14:26:56.76762993Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2024-07-03T14:26:56.769410871Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.779991ms grafana | logger=migrator t=2024-07-03T14:26:56.774559691Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-07-03T14:26:56.774577542Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=21.881µs grafana | logger=migrator t=2024-07-03T14:26:56.778675187Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2024-07-03T14:26:56.781181816Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.504879ms grafana | logger=migrator t=2024-07-03T14:26:56.784803861Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2024-07-03T14:26:56.786147182Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.342661ms grafana | logger=migrator t=2024-07-03T14:26:56.796610166Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2024-07-03T14:26:56.797876745Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.266049ms grafana | logger=migrator t=2024-07-03T14:26:56.802067184Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2024-07-03T14:26:56.803421935Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.355961ms grafana | logger=migrator t=2024-07-03T14:26:56.807229375Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2024-07-03T14:26:56.80787544Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=648.814µs grafana | logger=migrator t=2024-07-03T14:26:56.812759454Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2024-07-03T14:26:56.81344909Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=688.826µs grafana | logger=migrator t=2024-07-03T14:26:56.816527772Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2024-07-03T14:26:56.817670228Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.142017ms grafana | logger=migrator t=2024-07-03T14:26:56.820837842Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2024-07-03T14:26:56.822319616Z level=info msg="Migration 
successfully executed" id="create cloud_migration_run table v1" duration=1.478034ms grafana | logger=migrator t=2024-07-03T14:26:56.826436593Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2024-07-03T14:26:56.836129319Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.692146ms grafana | logger=migrator t=2024-07-03T14:26:56.844223918Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2024-07-03T14:26:56.855607734Z level=info msg="Migration successfully executed" id="add region_slug column" duration=11.378906ms grafana | logger=migrator t=2024-07-03T14:26:56.859619377Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2024-07-03T14:26:56.868011653Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.388776ms grafana | logger=migrator t=2024-07-03T14:26:56.871400553Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2024-07-03T14:26:56.87811917Z level=info msg="Migration successfully executed" id="add migration uid column" duration=6.718327ms grafana | logger=migrator t=2024-07-03T14:26:56.884012827Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2024-07-03T14:26:56.884261532Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=246.735µs grafana | logger=migrator t=2024-07-03T14:26:56.887642511Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2024-07-03T14:26:56.889854173Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.210852ms grafana | logger=migrator t=2024-07-03T14:26:56.8943915Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2024-07-03T14:26:56.906185865Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=11.794456ms grafana | logger=migrator t=2024-07-03T14:26:56.910342772Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2024-07-03T14:26:56.910523517Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=180.596µs grafana | logger=migrator t=2024-07-03T14:26:56.915947872Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2024-07-03T14:26:56.918179964Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.231492ms grafana | logger=migrator t=2024-07-03T14:26:56.923243383Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2024-07-03T14:26:56.923345145Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=101.392µs grafana | logger=migrator t=2024-07-03T14:26:56.927514153Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2024-07-03T14:26:56.937635939Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.088365ms grafana | logger=migrator t=2024-07-03T14:26:56.943825433Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator 
t=2024-07-03T14:26:56.953707604Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.881201ms grafana | logger=migrator t=2024-07-03T14:26:56.963487842Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2024-07-03T14:26:56.963942184Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=452.382µs grafana | logger=migrator t=2024-07-03T14:26:56.969033322Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2024-07-03T14:26:56.969547975Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=518.023µs grafana | logger=migrator t=2024-07-03T14:26:56.973756453Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2024-07-03T14:26:56.986232324Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.475962ms grafana | logger=migrator t=2024-07-03T14:26:56.990255208Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2024-07-03T14:26:57.001559482Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=11.302034ms grafana | logger=migrator t=2024-07-03T14:26:57.007071311Z level=info msg="migrations completed" performed=572 skipped=0 duration=4.781336875s grafana | logger=migrator t=2024-07-03T14:26:57.007656185Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2024-07-03T14:26:57.023060098Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2024-07-03T14:26:57.023326704Z level=info msg="Created default organization" grafana | logger=secrets t=2024-07-03T14:26:57.029497588Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-07-03T14:26:57.084643564Z level=info msg="Restored cache from database" duration=603.355µs grafana | logger=plugin.store t=2024-07-03T14:26:57.088277689Z level=info msg="Loading plugins..." 
grafana | logger=plugins.registration t=2024-07-03T14:26:57.123788383Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered" grafana | logger=plugins.initialization t=2024-07-03T14:26:57.123872445Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered" grafana | logger=local.finder t=2024-07-03T14:26:57.123975448Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled grafana | logger=plugin.store t=2024-07-03T14:26:57.124015829Z level=info msg="Plugins loaded" count=54 duration=35.74081ms grafana | logger=query_data t=2024-07-03T14:26:57.128071674Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2024-07-03T14:26:57.131472323Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-07-03T14:26:57.139138463Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert.state.manager t=2024-07-03T14:26:57.147945701Z level=info msg="Running in alternative execution of Error/NoData mode" grafana | logger=infra.usagestats.collector t=2024-07-03T14:26:57.151356211Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=provisioning.datasources t=2024-07-03T14:26:57.154631948Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2024-07-03T14:26:57.179422051Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2024-07-03T14:26:57.179447711Z level=info msg="finished to provision alerting" grafana | logger=grafanaStorageLogger t=2024-07-03T14:26:57.179582234Z level=info msg="Storage starting" grafana | logger=ngalert.state.manager t=2024-07-03T14:26:57.180181268Z level=info msg="Warming state cache for startup" grafana | logger=provisioning.dashboard t=2024-07-03T14:26:57.180591688Z level=info msg="starting to provision dashboards" grafana | logger=http.server t=2024-07-03T14:26:57.1823796Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=ngalert.multiorg.alertmanager t=2024-07-03T14:26:57.197139637Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=ngalert.state.manager t=2024-07-03T14:26:57.23344413Z level=info msg="State cache has been initialized" states=0 duration=53.256872ms grafana | logger=ngalert.scheduler t=2024-07-03T14:26:57.233564333Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 grafana | logger=ticker t=2024-07-03T14:26:57.233677665Z level=info msg=starting first_tick=2024-07-03T14:27:00Z grafana | logger=plugins.update.checker t=2024-07-03T14:26:57.267322916Z level=info msg="Update check succeeded" duration=87.344783ms grafana | logger=grafana.update.checker t=2024-07-03T14:26:57.271878503Z level=info msg="Update check succeeded" duration=92.217427ms grafana | logger=sqlstore.transactions t=2024-07-03T14:26:57.295243931Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=sqlstore.transactions t=2024-07-03T14:26:57.323950736Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-07-03T14:26:57.35776542Z level=info msg="Patterns 
update finished" duration=109.073433ms grafana | logger=provisioning.dashboard t=2024-07-03T14:26:57.590733544Z level=info msg="finished to provision dashboards" grafana | logger=grafana-apiserver t=2024-07-03T14:26:57.617124364Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2024-07-03T14:26:57.617750199Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=infra.usagestats t=2024-07-03T14:28:41.196433604Z level=info msg="Usage stats are ready to report" =================================== ======== Logs from kafka ======== kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-07-03 14:26:57,304] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr
/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,305] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,306] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,306] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,306] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,308] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@61d47554 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,312] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-07-03 14:26:57,316] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-07-03 14:26:57,323] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:57,338] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:57,338] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:57,344] INFO Socket connection established, initiating session, client: /172.17.0.8:45326, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:57,378] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000029b7990000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:57,508] INFO Session: 0x1000029b7990000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:57,508] INFO EventThread shut down for session: 0x1000029b7990000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-07-03 14:26:58,197] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-07-03 14:26:58,512] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-07-03 14:26:58,580] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-07-03 14:26:58,581] INFO starting (kafka.server.KafkaServer) kafka | [2024-07-03 14:26:58,581] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-07-03 14:26:58,593] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-07-03 14:26:58,596] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,596] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../
share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,597] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,598] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,598] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,598] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,599] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) kafka | [2024-07-03 14:26:58,603] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-07-03 14:26:58,608] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:58,609] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-07-03 14:26:58,613] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:58,618] INFO Socket connection established, initiating session, client: /172.17.0.8:45328, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:58,628] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000029b7990001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-07-03 14:26:58,632] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-07-03 14:26:58,911] INFO Cluster ID = oH0-pHXbT_qkvD-L4l6e0Q (kafka.server.KafkaServer) kafka | [2024-07-03 14:26:58,914] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-07-03 14:26:58,965] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.6-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = 
AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.metadata.migration.min.batch.size = 200 kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-07-03 14:26:58,992] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-03 14:26:58,992] INFO [ThrottledChannelReaper-Produce]: Starting 
(kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-03 14:26:58,994] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-03 14:26:58,995] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-07-03 14:26:59,020] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-07-03 14:26:59,024] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-07-03 14:26:59,033] INFO Loaded 0 logs in 12ms (kafka.log.LogManager) kafka | [2024-07-03 14:26:59,034] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-07-03 14:26:59,035] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) kafka | [2024-07-03 14:26:59,046] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-07-03 14:26:59,090] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-07-03 14:26:59,105] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-07-03 14:26:59,118] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-07-03 14:26:59,161] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-03 14:26:59,477] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-07-03 14:26:59,496] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-07-03 14:26:59,497] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-07-03 14:26:59,502] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-07-03 14:26:59,506] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-03 14:26:59,529] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,530] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,532] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,533] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,535] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,546] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-07-03 14:26:59,547] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2024-07-03 14:26:59,568] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2024-07-03 14:26:59,592] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1720016819584,1720016819584,1,0,0,72057773211844609,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2024-07-03 14:26:59,593] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2024-07-03 14:26:59,653] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2024-07-03 14:26:59,659] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,665] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,666] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,672] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2024-07-03 14:26:59,680] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:26:59,681] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,684] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:26:59,685] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,691] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-07-03 14:26:59,708] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-07-03 14:26:59,714] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) kafka | [2024-07-03 14:26:59,714] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,716] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2024-07-03 14:26:59,716] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-07-03 14:26:59,719] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,723] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,725] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,741] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,745] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,750] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-07-03 14:26:59,756] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2024-07-03 14:26:59,756] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-07-03 14:26:59,757] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,758] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,758] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,758] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,760] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,761] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,761] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,762] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2024-07-03 14:26:59,763] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,765] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2024-07-03 14:26:59,772] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-03 14:26:59,773] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-03 14:26:59,776] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-03 14:26:59,776] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-07-03 14:26:59,777] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-07-03 14:26:59,778] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-07-03 14:26:59,780] DEBUG 
[PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2024-07-03 14:26:59,780] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,780] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) kafka | [2024-07-03 14:26:59,782] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.8:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) kafka | [2024-07-03 14:26:59,784] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) kafka | [2024-07-03 14:26:59,789] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) kafka | [2024-07-03 14:26:59,785] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,789] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2024-07-03 14:26:59,790] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,790] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,791] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,793] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,804] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2024-07-03 14:26:59,805] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-07-03 14:26:59,808] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2024-07-03 14:26:59,810] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) kafka | [2024-07-03 14:26:59,819] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-07-03 14:26:59,819] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-07-03 14:26:59,819] INFO Kafka startTimeMs: 1720016819814 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-07-03 14:26:59,820] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-07-03 14:26:59,891] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2024-07-03 14:26:59,949] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-07-03 14:26:59,973] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-03 14:27:00,008] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-07-03 14:27:04,806] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2024-07-03 14:27:04,806] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-07-03 14:27:23,309] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2024-07-03 14:27:23,310] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-07-03 14:27:23,312] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-07-03 14:27:23,327] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 
(kafka.controller.KafkaController) kafka | [2024-07-03 14:27:23,363] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(eAOj3Kj-RK-B_kDbDL1rIg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(r0dxtfWfS62-f-_Ua7qa2A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-07-03 14:27:23,367] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2024-07-03 14:27:23,371] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,371] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 
1 (state.change.logger) kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,377] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-07-03 14:27:23,385] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 
from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-07-03 14:27:23,400] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller 
id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-07-03 14:27:23,571] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2024-07-03 14:27:23,574] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 
from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka 
| [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-07-03 14:27:23,577] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-07-03 14:27:23,583] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-07-03 14:27:23,585] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,590] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,590] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,590] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,591] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,591] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,591] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,592] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,592] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,592] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,594] TRACE [Broker 
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-07-03 14:27:23,642] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, 
__consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2024-07-03 14:27:23,642] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2024-07-03 14:27:23,706] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,722] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,727] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,729] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,731] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,755] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,757] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,757] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,757] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,758] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,766] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,768] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,768] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,768] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,768] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,776] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,776] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,777] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,777] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,777] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,784] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,785] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,785] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,785] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,785] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,793] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,794] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,794] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,794] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,794] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,804] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,805] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,806] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,806] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,806] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,814] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,815] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,815] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,815] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,815] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,822] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,822] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,822] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,822] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,823] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,829] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,832] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,832] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,832] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,832] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,843] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,844] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,844] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,844] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,844] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,850] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,851] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,851] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,851] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,851] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,860] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,860] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,860] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,861] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,861] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,868] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,868] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,868] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,868] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,869] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,875] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,876] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,876] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,876] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,876] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,885] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,886] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,886] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,886] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,886] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,895] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,896] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,896] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,896] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,896] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,903] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,904] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,905] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,905] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,905] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,914] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,915] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,915] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,915] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,915] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,930] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,931] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,931] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,931] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,931] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,939] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,940] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,940] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,940] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,940] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,945] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,945] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,945] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,946] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,946] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,953] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,954] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,954] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,954] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,954] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,961] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,962] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,962] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,962] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,962] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,970] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,971] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,971] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,971] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,972] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,977] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,978] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,978] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,978] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,978] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:23,984] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,985] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,985] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,985] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,985] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,991] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,991] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,991] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,991] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,991] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:23,997] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:23,997] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:23,997] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,998] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:23,998] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,003] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,004] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,005] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,005] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,005] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,011] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,012] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,012] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,012] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,012] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,020] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,022] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,022] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,023] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,023] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,031] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,032] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,033] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,033] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,033] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,043] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,044] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,044] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,044] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,045] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,055] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,056] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,056] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,056] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,056] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(eAOj3Kj-RK-B_kDbDL1rIg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,063] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,064] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,064] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,064] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,065] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,076] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,077] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,077] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,077] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,077] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,085] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,086] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,086] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,086] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,086] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,094] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,095] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,095] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,095] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,095] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,102] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,103] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,103] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,103] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,103] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,113] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,114] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,114] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,114] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,115] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,121] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,121] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,121] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,122] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,122] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,131] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,132] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,132] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,132] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,132] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,140] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,141] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,141] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,141] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,141] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,148] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,149] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,149] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,149] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,149] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,160] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,161] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,161] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,161] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,161] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,172] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,173] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,173] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,173] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,173] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,180] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,181] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,181] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,181] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,181] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,187] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,188] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,188] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,188] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,188] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,194] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,194] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,194] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,194] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,194] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-07-03 14:27:24,199] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-07-03 14:27:24,199] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-07-03 14:27:24,200] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,200] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-07-03 14:27:24,200] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 
(state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-07-03 14:27:24,210] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,212] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: 
Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,219] INFO [Broker id=1] Finished LeaderAndIsr request in 638ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-07-03 14:27:24,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,225] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,225] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,228] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=r0dxtfWfS62-f-_Ua7qa2A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=eAOj3Kj-RK-B_kDbDL1rIg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,230] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,234] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,234] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,234] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-07-03 14:27:24,239] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE 
[Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 
in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,246] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-07-03 14:27:24,246] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-07-03 14:27:24,331] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap 
in Empty state. Created a new member id consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,345] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,352] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 in Empty state. Created a new member id consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,355] INFO [GroupCoordinator 1]: Preparing to rebalance group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 in state PreparingRebalance with old generation 0 (__consumer_offsets-18) (reason: Adding new member consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,582] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group d8b2b84c-6638-4843-9df5-de6a0e09886f in Empty state. Created a new member id consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:24,586] INFO [GroupCoordinator 1]: Preparing to rebalance group d8b2b84c-6638-4843-9df5-de6a0e09886f in state PreparingRebalance with old generation 0 (__consumer_offsets-29) (reason: Adding new member consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:27,358] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:27,361] INFO [GroupCoordinator 1]: Stabilized group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 generation 1 (__consumer_offsets-18) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:27,383] INFO [GroupCoordinator 1]: Assignment received from leader consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5 for group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:27,384] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:27,587] INFO [GroupCoordinator 1]: Stabilized group d8b2b84c-6638-4843-9df5-de6a0e09886f generation 1 (__consumer_offsets-29) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-07-03 14:27:27,601] INFO [GroupCoordinator 1]: Assignment received from leader consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab for group d8b2b84c-6638-4843-9df5-de6a0e09886f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) =================================== ======== Logs from mariadb ======== mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-07-03 14:26:47 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-07-03 14:26:47 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-07-03 14:26:47 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-07-03 14:26:48+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-07-03 14:26:48+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-07-03 14:26:48+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-07-03 14:26:49 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 
mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-07-03 14:26:49 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-07-03 14:26:49 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-07-03 14:26:49 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: log sequence number 45452; transaction id 14 mariadb | 2024-07-03 14:26:49 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-07-03 14:26:49 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-07-03 14:26:49 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-07-03 14:26:49 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-07-03 14:26:49 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-07-03 14:26:50+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-07-03 14:26:51+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-07-03 14:26:51+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-07-03 14:26:51+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-07-03 14:26:51+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. 
mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-07-03 14:26:52+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-07-03 14:26:52 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: Buffer pool(s) dump completed at 240703 14:26:52 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Shutdown completed; log sequence number 330344; transaction id 298 mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-07-03 14:26:53+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-07-03 14:26:53+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 
mariadb | mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-07-03 14:26:53 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-07-03 14:26:53 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: log sequence number 330344; transaction id 299 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-07-03 14:26:53 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-07-03 14:26:53 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-07-03 14:26:53 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-07-03 14:26:53 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-07-03 14:26:53 0 [Note] Server socket created on IP: '::'. mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Buffer pool(s) load completed at 240703 14:26:53 mariadb | 2024-07-03 14:26:53 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-07-03 14:26:53 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-07-03 14:26:54 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) mariadb | 2024-07-03 14:26:54 32 [Warning] Aborted connection 32 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) =================================== ======== Logs from apex-pdp ======== policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.3:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | Waiting for pap port 6969... 
policy-apex-pdp | kafka (172.17.0.8:9092) open policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-07-03T14:27:23.664+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-07-03T14:27:23.824+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = d8b2b84c-6638-4843-9df5-de6a0e09886f policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-07-03T14:27:24.037+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-07-03T14:27:24.038+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-07-03T14:27:24.038+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016844036 policy-apex-pdp | [2024-07-03T14:27:24.041+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-1, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-07-03T14:27:24.056+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-07-03T14:27:24.056+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-07-03T14:27:24.058+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, 
toString()=SingleThreadedBusTopicSource [consumerGroup=d8b2b84c-6638-4843-9df5-de6a0e09886f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-07-03T14:27:24.088+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = d8b2b84c-6638-4843-9df5-de6a0e09886f policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 
policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-07-03T14:27:24.104+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-07-03T14:27:24.104+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-07-03T14:27:24.104+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016844104 policy-apex-pdp | [2024-07-03T14:27:24.105+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-07-03T14:27:24.105+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=33684537-9e09-4f0c-843f-e073056f35f6, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-07-03T14:27:24.118+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | 
ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-07-03T14:27:24.132+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-07-03T14:27:24.148+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-07-03T14:27:24.148+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-07-03T14:27:24.148+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016844148 policy-apex-pdp | [2024-07-03T14:27:24.149+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=33684537-9e09-4f0c-843f-e073056f35f6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-07-03T14:27:24.149+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-07-03T14:27:24.149+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-07-03T14:27:24.152+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-07-03T14:27:24.152+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d8b2b84c-6638-4843-9df5-de6a0e09886f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d8b2b84c-6638-4843-9df5-de6a0e09886f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, 
#recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2024-07-03T14:27:24.171+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-07-03T14:27:24.190+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ebbab027-f2a3-403a-acca-49e0bd29cb33","timestampMs":1720016844156,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-07-03T14:27:24.401+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-07-03T14:27:24.402+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-07-03T14:27:24.402+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-07-03T14:27:24.402+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-07-03T14:27:24.425+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-07-03T14:27:24.425+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-07-03T14:27:24.426+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
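The ConsumerConfig blocks above show the PDP subscribing to the policy-pdp-pap topic on kafka:9092 with auto.offset.reset=latest and string (de)serialization, and the first heartbeat it publishes is the PDP_STATUS JSON printed just above. A minimal sketch of an equivalent subscriber, assuming the kafka-python client (the PDP itself uses the Java client) and a hypothetical group id:

    import json
    from kafka import KafkaConsumer  # assumption: kafka-python is installed

    # Mirrors the ConsumerConfig printed above: kafka:9092, latest offsets, string values.
    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="kafka:9092",
        group_id="log-inspector",          # hypothetical group id, not the PDP's UUID group
        auto_offset_reset="latest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for msg in consumer:
        # Heartbeats look like the PDP_STATUS JSON shown in this log.
        print(msg.value.get("messageName"), msg.value.get("state"))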
policy-apex-pdp | [2024-07-03T14:27:24.434+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-07-03T14:27:24.543+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q policy-apex-pdp | [2024-07-03T14:27:24.546+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-07-03T14:27:24.544+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q policy-apex-pdp | [2024-07-03T14:27:24.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-07-03T14:27:24.567+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] (Re-)joining group policy-apex-pdp | [2024-07-03T14:27:24.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Request joining group due to: need to re-join with the given member-id: consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab policy-apex-pdp | [2024-07-03T14:27:24.584+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2024-07-03T14:27:24.584+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] (Re-)joining group policy-apex-pdp | [2024-07-03T14:27:25.091+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-07-03T14:27:25.093+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-07-03T14:27:25.235+00:00|INFO|RequestLog|qtp739264372-32] 172.17.0.1 - - [03/Jul/2024:14:27:25 +0000] "GET / HTTP/1.1" 401 495 "-" "curl/7.58.0" policy-apex-pdp | [2024-07-03T14:27:27.588+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab', protocol='range'} policy-apex-pdp | [2024-07-03T14:27:27.597+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Finished assignment for group at generation 1: {consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-07-03T14:27:27.603+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab', protocol='range'} policy-apex-pdp | [2024-07-03T14:27:27.604+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-07-03T14:27:27.605+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-07-03T14:27:27.611+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-07-03T14:27:27.621+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
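The unauthenticated curl probe above gets a 401; the PDP's REST server is protected by the basic-auth credentials printed in the Jetty configuration (policyadmin / zb!XztG34), and the healthcheck requests later in this log succeed with them. A minimal sketch of the same check with python-requests, assuming the container hostname policy-apex-pdp resolves from where the script runs:

    import requests  # python-requests, the same client the CSIT containers use per this log

    # Endpoint and credentials as printed in the RequestLog / Jetty lines of this log.
    resp = requests.get(
        "http://policy-apex-pdp:6969/policy/apex-pdp/v1/healthcheck",
        auth=("policyadmin", "zb!XztG34"),
        timeout=10,
    )
    print(resp.status_code, resp.text)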
policy-apex-pdp | [2024-07-03T14:27:44.154+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-07-03T14:27:44.176+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-07-03T14:27:44.179+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-07-03T14:27:44.331+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.344+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-07-03T14:27:44.344+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-07-03T14:27:44.348+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.365+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-07-03T14:27:44.365+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-07-03T14:27:44.368+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.368+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-07-03T14:27:44.396+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.398+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.409+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.409+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-07-03T14:27:44.432+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.434+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.442+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-07-03T14:27:44.443+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-07-03T14:27:45.288+00:00|INFO|RequestLog|qtp739264372-31] 172.17.0.1 - policyadmin [03/Jul/2024:14:27:45 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "-" "curl/7.58.0" policy-apex-pdp | [2024-07-03T14:27:56.087+00:00|INFO|RequestLog|qtp739264372-30] 172.17.0.5 - policyadmin [03/Jul/2024:14:27:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.53.0" policy-apex-pdp | [2024-07-03T14:28:17.222+00:00|INFO|RequestLog|qtp739264372-26] 172.17.0.6 - policyadmin [03/Jul/2024:14:28:17 +0000] "GET /policy/apex-pdp/v1/healthcheck?null HTTP/1.1" 200 109 "-" "python-requests/2.32.3" policy-apex-pdp | [2024-07-03T14:28:18.769+00:00|INFO|RequestLog|qtp739264372-27] 172.17.0.6 - policyadmin [03/Jul/2024:14:28:18 +0000] "GET /metrics?null HTTP/1.1" 200 11010 "-" "python-requests/2.32.3" policy-apex-pdp | [2024-07-03T14:28:18.791+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.6 - policyadmin [03/Jul/2024:14:28:18 +0000] "GET /policy/apex-pdp/v1/healthcheck?null HTTP/1.1" 200 109 "-" "python-requests/2.32.3" policy-apex-pdp | [2024-07-03T14:28:56.078+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.5 - policyadmin [03/Jul/2024:14:28:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.53.0" =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... 
policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-07-03T14:27:02.164+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-07-03T14:27:02.228+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-07-03T14:27:02.229+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-07-03T14:27:04.184+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-07-03T14:27:04.381+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 186 ms. Found 6 JPA repository interfaces. policy-api | [2024-07-03T14:27:05.166+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-07-03T14:27:05.177+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-07-03T14:27:05.179+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-07-03T14:27:05.179+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-07-03T14:27:05.275+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-07-03T14:27:05.275+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2971 ms policy-api | [2024-07-03T14:27:05.610+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-07-03T14:27:05.678+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-07-03T14:27:05.725+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-07-03T14:27:06.030+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-07-03T14:27:06.063+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-07-03T14:27:06.176+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@67b100fe policy-api | [2024-07-03T14:27:06.179+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
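Each container in this log blocks until its dependencies' ports are reachable ("Waiting for mariadb port 3306...", "Waiting for policy-db-migrator port 6824..."); the policy-db-migrator log further down shows the underlying nc retries. A minimal wait-for-port sketch with the same behaviour, assuming a 2-second retry interval (the real interval used by the images is not shown here):

    import socket
    import time

    def wait_for_port(host: str, port: int, timeout_s: float = 120.0) -> None:
        # Keep retrying a TCP connect until it succeeds or the overall timeout expires.
        deadline = time.monotonic() + timeout_s
        while True:
            try:
                with socket.create_connection((host, port), timeout=2):
                    print(f"{host} ({port}) open")
                    return
            except OSError:
                if time.monotonic() > deadline:
                    raise TimeoutError(f"{host}:{port} not reachable")
                time.sleep(2)

    wait_for_port("mariadb", 3306)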
policy-api | [2024-07-03T14:27:08.164+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-07-03T14:27:08.167+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-07-03T14:27:08.862+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-07-03T14:27:09.647+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-07-03T14:27:10.721+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-07-03T14:27:10.885+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7ce299c6, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@278e5f8e, org.springframework.security.web.context.SecurityContextHolderFilter@93231b2, org.springframework.security.web.header.HeaderWriterFilter@37264d08, org.springframework.security.web.authentication.logout.LogoutFilter@112188cc, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@457512b, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@65d5de1a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@44d51c85, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@44a3b4d9, org.springframework.security.web.access.ExceptionTranslationFilter@59fe8d94, org.springframework.security.web.access.intercept.AuthorizationFilter@4e42beba] policy-api | [2024-07-03T14:27:11.542+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-07-03T14:27:11.626+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-07-03T14:27:11.647+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-07-03T14:27:11.666+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.188 seconds (process running for 10.802) policy-api | [2024-07-03T14:27:39.922+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-07-03T14:27:39.922+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-07-03T14:27:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2024-07-03T14:28:17.410+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: policy-api | [] =================================== ======== Logs from csit-tests ======== policy-csit | Invoking the robot tests from: apex-pdp-test.robot apex-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v 
POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV: policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test policy-csit | ============================================================================== policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ExecuteApexSampleDomainPolicy | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ExecuteApexTestPnfPolicy | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ExecuteApexTestPnfPolicyWithMetadataSet | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify policy-apex-pdp is exporting prometheus metrics | FAIL | policy-csit | '# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. policy-csit | # TYPE process_cpu_seconds_total counter policy-csit | process_cpu_seconds_total 8.34 policy-csit | # HELP process_start_time_seconds Start time of the process since unix epoch in seconds. policy-csit | # TYPE process_start_time_seconds gauge policy-csit | process_start_time_seconds 1.720016842817E9 policy-csit | # HELP process_open_fds Number of open file descriptors. policy-csit | # TYPE process_open_fds gauge policy-csit | process_open_fds 387.0 policy-csit | # HELP process_max_fds Maximum number of open file descriptors. policy-csit | # TYPE process_max_fds gauge policy-csit | process_max_fds 1048576.0 policy-csit | # HELP process_virtual_memory_bytes Virtual memory size in bytes. policy-csit | # TYPE process_virtual_memory_bytes gauge policy-csit | process_virtual_memory_bytes 1.0461679616E10 policy-csit | # HELP process_resident_memory_bytes Resident memory size in bytes. policy-csit | # TYPE process_resident_memory_bytes gauge policy-csit | process_resident_memory_bytes 1.99868416E8 policy-csit | [ Message content over the limit has been removed. 
] policy-csit | # TYPE pdpa_policy_deployments_total counter policy-csit | # HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. policy-csit | # TYPE jvm_memory_pool_allocated_bytes_created gauge policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.720016844472E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Old Gen",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Eden Space",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Survivor Space",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.720016844501E9 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.720016844501E9 policy-csit | ' does not contain 'pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0' policy-csit | ------------------------------------------------------------------------------ policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test | FAIL | policy-csit | 5 tests, 1 passed, 4 failed policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas policy-csit | ============================================================================== policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionAndEventRateLowComplexity :: Validate that ... | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionAndEventRateModerateComplexity :: Validate ... | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionAndEventRateHighComplexity :: Validate that... | FAIL | policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 policy-csit | ------------------------------------------------------------------------------ policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyExecutionTimes :: Validate policy execution times us... 
| FAIL | policy-csit | Resolving variable '${resp['data']['result'][0]['value'][1]}' failed: IndexError: list index out of range policy-csit | ------------------------------------------------------------------------------ policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas | FAIL | policy-csit | 6 tests, 2 passed, 4 failed policy-csit | ============================================================================== policy-csit | Apex-Pdp-Test & Apex-Slas | FAIL | policy-csit | 11 tests, 3 passed, 8 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 8 =================================== ======== Logs from policy-db-migrator ======== policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, 
localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
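Every script in this 0800 series follows the same idempotent pattern: one CREATE TABLE IF NOT EXISTS statement bracketed by the migrator's "--------------" separators, so a re-run is a no-op. Below is a minimal, illustrative sketch (not something policy-db-migrator itself runs) of how one of these tables could be double-checked by hand against the same MariaDB instance; the policyadmin schema name is an assumption taken from the migrator output above.

    -- Illustrative only; assumes the policyadmin schema reported by the migrator.
    USE policyadmin;

    -- Re-issuing the guarded DDL from 0190-jpatoscacapabilitytype_metadata.sql is safe:
    -- IF NOT EXISTS makes it a no-op once the table already exists.
    CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (
        name         VARCHAR(120) NULL,
        version      VARCHAR(20)  NULL,
        METADATA     VARCHAR(255) NULL,
        METADATA_KEY VARCHAR(255) NULL
    );

    -- Confirm the column layout matches what the upgrade logged.
    DESCRIBE jpatoscacapabilitytype_metadata;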
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | 
-------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 
policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, 
conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, 
requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | 
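The concept tables above are all keyed by (name, version), while each *_tosca* mapping table carries both the container key and the contained concept's name and version. The query below is a hypothetical usage sketch, not part of the migration, showing how a row in toscanodetypes_toscanodetype resolves to its toscanodetype entries once data is present; column names are taken verbatim from the DDL logged above (including the concpetContainerMapVersion spelling), and the WHERE values are placeholders.

    -- Hypothetical query; not executed by policy-db-migrator.
    -- Lists every node type attached to one TOSCA node-types container.
    SELECT m.conceptContainerName,
           m.conceptContainerVersion,
           nt.name,
           nt.version
    FROM   toscanodetypes_toscanodetype AS m
    JOIN   toscanodetype                AS nt
           ON  nt.name    = m.name
           AND nt.version = m.version
    WHERE  m.conceptContainerMapName    = 'ToscaNodeTypesMap'   -- placeholder value
      AND  m.concpetContainerMapVersion = '1.0.0';              -- placeholder value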
policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, 
derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) 
NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES 
toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP 
datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- 
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | TRUNCATE TABLE sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 
policy-db-migrator | -------------- policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | name version policy-db-migrator | policyadmin 1300 policy-db-migrator | ID script operation from_version to_version tag success atTime policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 26 
0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 
14:26:56 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:58 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:58 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 
0307241426541000u 1 2024-07-03 14:26:59 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0307241426541100u 1 2024-07-03 14:26:59 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0307241426541300u 1 2024-07-03 14:27:00 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0307241426541300u 1 2024-07-03 14:27:00 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0307241426541300u 1 2024-07-03 14:27:00 policy-db-migrator | policyadmin: OK @ 1300 =================================== ======== Logs from pap ======== policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.3:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.8:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.10) policy-pap | policy-pap | [2024-07-03T14:27:13.989+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-pap | [2024-07-03T14:27:14.046+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 30 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-07-03T14:27:14.047+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-07-03T14:27:16.108+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-07-03T14:27:16.206+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 88 ms. Found 7 JPA repository interfaces. policy-pap | [2024-07-03T14:27:16.630+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-07-03T14:27:16.630+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-07-03T14:27:17.226+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-07-03T14:27:17.236+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-07-03T14:27:17.238+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-07-03T14:27:17.238+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2024-07-03T14:27:17.325+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-07-03T14:27:17.325+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3214 ms policy-pap | [2024-07-03T14:27:17.741+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-07-03T14:27:17.803+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-pap | [2024-07-03T14:27:18.145+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-07-03T14:27:18.255+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9 policy-pap | [2024-07-03T14:27:18.258+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2024-07-03T14:27:18.286+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-07-03T14:27:19.729+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-07-03T14:27:19.744+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-07-03T14:27:20.247+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-07-03T14:27:20.672+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-07-03T14:27:20.787+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-07-03T14:27:21.060+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 98eef03c-6c97-41d2-b0d1-0e3fd148d393 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null 
policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-03T14:27:21.231+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-03T14:27:21.231+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-03T14:27:21.231+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016841229 policy-pap | [2024-07-03T14:27:21.233+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-1, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-03T14:27:21.234+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | 
request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-03T14:27:21.240+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-03T14:27:21.240+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-03T14:27:21.240+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016841240 policy-pap | [2024-07-03T14:27:21.241+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-03T14:27:21.544+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, 
pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-07-03T14:27:21.687+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-07-03T14:27:21.901+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7cf66cf9, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@38f63756, org.springframework.security.web.context.SecurityContextHolderFilter@574f9e36, org.springframework.security.web.header.HeaderWriterFilter@70aa03c0, org.springframework.security.web.authentication.logout.LogoutFilter@37b80ec7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@522f0bb8, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@60b4d934, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@41abee65, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3d7caf9c, org.springframework.security.web.access.ExceptionTranslationFilter@5ced0537, org.springframework.security.web.access.intercept.AuthorizationFilter@4c0930c4] policy-pap | [2024-07-03T14:27:22.622+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-07-03T14:27:22.718+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-07-03T14:27:22.743+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-07-03T14:27:22.761+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-07-03T14:27:22.762+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-07-03T14:27:22.762+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-07-03T14:27:22.763+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-07-03T14:27:22.763+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2024-07-03T14:27:22.764+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-07-03T14:27:22.764+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-07-03T14:27:22.766+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=98eef03c-6c97-41d2-b0d1-0e3fd148d393, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4270705f policy-pap | 
[2024-07-03T14:27:22.777+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=98eef03c-6c97-41d2-b0d1-0e3fd148d393, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-03T14:27:22.777+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 98eef03c-6c97-41d2-b0d1-0e3fd148d393 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 
policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-03T14:27:22.783+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-03T14:27:22.783+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-03T14:27:22.784+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842783 policy-pap | [2024-07-03T14:27:22.784+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8195bf70-75c1-45e7-9426-15be720361af, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@18bf1bad policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8195bf70-75c1-45e7-9426-15be720361af, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | 
sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-07-03T14:27:22.792+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-03T14:27:22.792+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-03T14:27:22.792+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842792 policy-pap | [2024-07-03T14:27:22.792+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-07-03T14:27:22.793+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-07-03T14:27:22.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8195bf70-75c1-45e7-9426-15be720361af, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-03T14:27:22.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=98eef03c-6c97-41d2-b0d1-0e3fd148d393, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-07-03T14:27:22.793+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6a1e44dd-949d-4040-bea9-ddb1f234f1b4, alive=false, 
publisher=null]]: starting policy-pap | [2024-07-03T14:27:22.812+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | 
ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-07-03T14:27:22.830+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842849 policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6a1e44dd-949d-4040-bea9-ddb1f234f1b4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-07-03T14:27:22.850+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32d97e7c-9ca6-43ed-9977-c911814efa02, alive=false, publisher=null]]: starting policy-pap | [2024-07-03T14:27:22.850+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 
policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-07-03T14:27:22.851+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
policy-pap | [2024-07-03T14:27:22.853+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842853 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32d97e7c-9ca6-43ed-9977-c911814efa02, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2024-07-03T14:27:22.859+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-07-03T14:27:22.860+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-07-03T14:27:22.862+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2024-07-03T14:27:22.862+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-07-03T14:27:22.864+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-07-03T14:27:22.864+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2024-07-03T14:27:22.866+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2024-07-03T14:27:22.866+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2024-07-03T14:27:22.867+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2024-07-03T14:27:22.868+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.554 seconds (process running for 10.136) policy-pap | [2024-07-03T14:27:23.262+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-07-03T14:27:23.263+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q policy-pap | [2024-07-03T14:27:23.263+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q policy-pap | [2024-07-03T14:27:23.266+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q policy-pap | [2024-07-03T14:27:23.356+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.356+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q policy-pap | [2024-07-03T14:27:23.384+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2024-07-03T14:27:23.397+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2024-07-03T14:27:23.399+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2024-07-03T14:27:23.482+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.527+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.591+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.643+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.701+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.749+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.807+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.855+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.914+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:23.966+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:24.019+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:24.073+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching 
metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:24.129+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:24.184+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:24.244+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-07-03T14:27:24.297+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-07-03T14:27:24.303+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-07-03T14:27:24.337+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc policy-pap | [2024-07-03T14:27:24.337+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-07-03T14:27:24.337+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-07-03T14:27:24.348+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-07-03T14:27:24.350+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] (Re-)joining group policy-pap | [2024-07-03T14:27:24.353+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Request joining group due to: need to re-join with the given member-id: consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5 policy-pap | [2024-07-03T14:27:24.353+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-07-03T14:27:24.353+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] (Re-)joining group policy-pap | [2024-07-03T14:27:27.361+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc', protocol='range'} policy-pap | [2024-07-03T14:27:27.363+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Successfully joined group with generation Generation{generationId=1, memberId='consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5', protocol='range'} policy-pap | [2024-07-03T14:27:27.373+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-07-03T14:27:27.373+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Finished assignment for group at generation 1: {consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-07-03T14:27:27.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Successfully synced group in generation Generation{generationId=1, memberId='consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5', protocol='range'} policy-pap | [2024-07-03T14:27:27.404+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc', protocol='range'} policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-07-03T14:27:27.430+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, 
groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-07-03T14:27:27.430+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-07-03T14:27:27.447+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-07-03T14:27:27.447+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-07-03T14:27:41.592+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2024-07-03T14:27:41.592+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' policy-pap | [2024-07-03T14:27:41.595+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms policy-pap | [2024-07-03T14:27:44.187+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2024-07-03T14:27:44.188+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-pap | [2024-07-03T14:27:44.188+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-pap | [2024-07-03T14:27:44.195+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-07-03T14:27:44.284+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting policy-pap | [2024-07-03T14:27:44.284+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting listener policy-pap | [2024-07-03T14:27:44.284+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting timer policy-pap | [2024-07-03T14:27:44.285+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285] policy-pap | [2024-07-03T14:27:44.287+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting enqueue policy-pap | [2024-07-03T14:27:44.287+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer 
[name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285] policy-pap | [2024-07-03T14:27:44.287+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate started policy-pap | [2024-07-03T14:27:44.293+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.332+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.332+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-07-03T14:27:44.333+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.333+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-07-03T14:27:44.361+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-pap | [2024-07-03T14:27:44.362+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-07-03T14:27:44.363+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} policy-pap | [2024-07-03T14:27:44.366+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping policy-pap | 
[2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping enqueue policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping timer policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285] policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping listener policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopped policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate successful policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e start publishing next request policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting listener policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting timer policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386] policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting enqueue policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange started policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386] policy-pap | [2024-07-03T14:27:44.387+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.392+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.397+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 99adbbf5-9c0b-4530-9ab3-1c88b54e568b policy-pap | 
[2024-07-03T14:27:44.403+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.404+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-07-03T14:27:44.409+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.409+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id eaa7f9bc-5e08-4e39-9676-3218fb3ee976 policy-pap | [2024-07-03T14:27:44.415+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.416+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-07-03T14:27:44.419+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping enqueue policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping timer policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386] policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping listener policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopped policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange successful policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e start publishing next request policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting listener policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting timer policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=7cf59cc2-d930-4715-b62d-a6f327b4fadd, expireMs=1720016894421] policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting enqueue policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate started policy-pap | [2024-07-03T14:27:44.422+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.431+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | 
[2024-07-03T14:27:44.432+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-07-03T14:27:44.433+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.433+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-07-03T14:27:44.442+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.442+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7cf59cc2-d930-4715-b62d-a6f327b4fadd policy-pap | [2024-07-03T14:27:44.445+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping enqueue policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping timer policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7cf59cc2-d930-4715-b62d-a6f327b4fadd, expireMs=1720016894421] policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping listener policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopped policy-pap | [2024-07-03T14:27:44.450+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate successful policy-pap | [2024-07-03T14:27:44.450+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e has no more requests policy-pap | [2024-07-03T14:28:14.286+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285] policy-pap | 
[2024-07-03T14:28:14.386+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386]
policy-pap | [2024-07-03T14:29:22.866+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
===================================
======== Logs from prometheus ========
prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:589 level=info msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:633 level=info msg="Starting Prometheus Server" mode=server version="(version=2.53.0, branch=HEAD, revision=4c35b9250afefede41c5f5acd76191f90f625898)"
prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:638 level=info build_context="(go=go1.22.4, platform=linux/amd64, user=root@7f8d89cbbd64, date=20240619-07:39:12, tags=netgo,builtinassets,stringlabels)"
prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:639 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:640 level=info fd_limits="(soft=1048576, hard=1048576)"
prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:641 level=info vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | ts=2024-07-03T14:26:49.540Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | ts=2024-07-03T14:26:49.541Z caller=main.go:1148 level=info msg="Starting TSDB ..."
prometheus | ts=2024-07-03T14:26:49.542Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
prometheus | ts=2024-07-03T14:26:49.542Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
prometheus | ts=2024-07-03T14:26:49.545Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus | ts=2024-07-03T14:26:49.545Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.41µs
prometheus | ts=2024-07-03T14:26:49.545Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while"
prometheus | ts=2024-07-03T14:26:49.546Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
prometheus | ts=2024-07-03T14:26:49.546Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=80.241µs wal_replay_duration=565.453µs wbl_replay_duration=170ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.41µs total_replay_duration=719.526µs
prometheus | ts=2024-07-03T14:26:49.549Z caller=main.go:1169 level=info fs_type=EXT4_SUPER_MAGIC
prometheus | ts=2024-07-03T14:26:49.549Z caller=main.go:1172 level=info msg="TSDB started"
prometheus | ts=2024-07-03T14:26:49.549Z caller=main.go:1354 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | ts=2024-07-03T14:26:49.552Z caller=main.go:1391 level=info msg="updated GOGC" old=100 new=75
prometheus | ts=2024-07-03T14:26:49.553Z caller=main.go:1402 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.891122ms db_storage=1.29µs remote_storage=1.92µs web_handler=1.7µs query_engine=1.28µs scrape=330.147µs scrape_sd=203.365µs notify=37.86µs notify_sd=43.201µs rules=2.07µs tracing=7.781µs
prometheus | ts=2024-07-03T14:26:49.553Z caller=main.go:1133 level=info msg="Server is ready to receive web requests."
prometheus | ts=2024-07-03T14:26:49.553Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..."
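Editor's note: the Prometheus container above reports that it is listening on 0.0.0.0:9090 and is "ready to receive web requests". As an illustration only, a readiness probe of the kind a CSIT setup step might run could poll the standard Prometheus /-/ready endpoint before scraping begins; the localhost host name and the small Java class below are assumptions for this sketch, not part of this job's scripts.

// Minimal readiness probe sketch (assumption: Prometheus reachable on localhost:9090,
// matching the "address=0.0.0.0:9090" line in the log above).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PrometheusReadyCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9090/-/ready"))
                .GET()
                .build();
        // Prometheus answers 200 once it logs "Server is ready to receive web requests."
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body().trim());
    }
}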
=================================== ======== Logs from simulator ======== simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2024-07-03 14:26:51,112 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2024-07-03 14:26:51,182 INFO org.onap.policy.models.simulators starting simulator | 2024-07-03 14:26:51,183 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2024-07-03 14:26:51,410 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2024-07-03 14:26:51,411 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-07-03 14:26:51,543 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-07-03 14:26:51,556 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:51,559 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:51,567 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-07-03 14:26:51,646 INFO Session workerName=node0 simulator | 2024-07-03 14:26:52,195 INFO 
Using GSON for REST calls simulator | 2024-07-03 14:26:52,284 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} simulator | 2024-07-03 14:26:52,293 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-07-03 14:26:52,309 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1723ms simulator | 2024-07-03 14:26:52,309 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4249 ms. simulator | 2024-07-03 14:26:52,316 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2024-07-03 14:26:52,319 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-07-03 14:26:52,319 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:52,320 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, 
jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:52,321 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-07-03 14:26:52,341 INFO Session workerName=node0 simulator | 2024-07-03 14:26:52,445 INFO Using GSON for REST calls simulator | 2024-07-03 14:26:52,458 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} simulator | 2024-07-03 14:26:52,460 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2024-07-03 14:26:52,460 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1875ms simulator | 2024-07-03 14:26:52,460 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4859 ms. simulator | 2024-07-03 14:26:52,461 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2024-07-03 14:26:52,464 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-07-03 14:26:52,464 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:52,465 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:52,466 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-07-03 14:26:52,472 INFO Session workerName=node0 simulator | 2024-07-03 14:26:52,528 INFO Using GSON for REST calls simulator | 2024-07-03 14:26:52,541 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} simulator | 2024-07-03 14:26:52,543 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2024-07-03 14:26:52,544 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1959ms simulator | 2024-07-03 14:26:52,544 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms. 
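Editor's note: the simulator log above shows the A&AI, SDNC and SO simulators accepting HTTP connections on ports 6666, 6668 and 6669 respectively. Purely as a sketch (the localhost host name and this helper class are assumptions, not part of the CSIT scripts), a basic reachability check against those ports could look like this:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SimulatorPortCheck {
    public static void main(String[] args) {
        // A&AI, SDNC and SO simulator ports taken from the log above
        int[] ports = {6666, 6668, 6669};
        for (int port : ports) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("localhost", port), 2000);
                System.out.println("port " + port + " is accepting connections");
            } catch (IOException e) {
                System.out.println("port " + port + " is not reachable: " + e.getMessage());
            }
        }
    }
}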
simulator | 2024-07-03 14:26:52,545 INFO org.onap.policy.models.simulators starting VFC simulator simulator | 2024-07-03 14:26:52,549 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-07-03 14:26:52,549 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:52,550 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-07-03 14:26:52,551 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-07-03 14:26:52,554 INFO Session workerName=node0 simulator | 2024-07-03 14:26:52,601 INFO Using GSON for REST calls simulator | 2024-07-03 14:26:52,611 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} simulator | 2024-07-03 14:26:52,612 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} simulator | 2024-07-03 14:26:52,612 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2027ms simulator | 2024-07-03 14:26:52,612 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, 
user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4938 ms. simulator | 2024-07-03 14:26:52,613 INFO org.onap.policy.models.simulators started =================================== ======== Logs from zookeeper ======== zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2024-07-03 14:26:52,164] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,175] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,175] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,175] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,175] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,177] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-07-03 14:26:52,177] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-07-03 14:26:52,177] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-07-03 14:26:52,177] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2024-07-03 14:26:52,179] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2024-07-03 14:26:52,180] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,180] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,180] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,180] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,180] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-07-03 14:26:52,180] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2024-07-03 14:26:52,199] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2024-07-03 14:26:52,202] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-07-03 14:26:52,202] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-07-03 14:26:52,206] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-07-03 14:26:52,217] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,217] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,218] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,218] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/ja
ckson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 
14:26:52,219] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,221] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2024-07-03 14:26:52,222] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,222] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,223] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-07-03 14:26:52,223] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-07-03 14:26:52,226] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,227] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,227] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-07-03 14:26:52,227] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-07-03 14:26:52,227] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,248] INFO Logging initialized @615ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2024-07-03 14:26:52,355] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-07-03 14:26:52,355] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-07-03 14:26:52,375] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) zookeeper | [2024-07-03 14:26:52,409] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2024-07-03 14:26:52,409] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2024-07-03 14:26:52,410] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2024-07-03 14:26:52,414] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2024-07-03 14:26:52,423] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-07-03 14:26:52,442] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2024-07-03 14:26:52,443] INFO Started @810ms (org.eclipse.jetty.server.Server) zookeeper | [2024-07-03 14:26:52,443] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2024-07-03 14:26:52,452] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-07-03 14:26:52,453] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-07-03 14:26:52,456] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-07-03 14:26:52,457] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-07-03 14:26:52,481] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-07-03 14:26:52,481] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-07-03 14:26:52,483] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-07-03 14:26:52,483] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-07-03 14:26:52,489] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2024-07-03 14:26:52,489] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-07-03 14:26:52,492] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-07-03 14:26:52,493] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-07-03 14:26:52,494] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-07-03 14:26:52,512] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2024-07-03 14:26:52,511] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2024-07-03 14:26:52,535] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2024-07-03 14:26:52,536] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2024-07-03 14:26:57,361] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) =================================== Tearing down containers... time="2024-07-03T14:29:26Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string." 
Container policy-apex-pdp Stopping
Container grafana Stopping
Container policy-csit Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-apex-pdp Stopped
Container policy-apex-pdp Removing
Container policy-apex-pdp Removed
Container simulator Stopping
Container policy-pap Stopping
Container simulator Stopped
Container simulator Removing
Container policy-pap Stopped
Container policy-pap Removing
Container simulator Removed
Container policy-pap Removed
Container kafka Stopping
Container policy-api Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container mariadb Stopping
Container mariadb Stopped
Container mariadb Removing
Container mariadb Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2297 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
Build step 'Publish Robot Framework test results' changed build result to UNSTABLE
[PostBuildScript] - [INFO] Executing post build scripts.
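Note: the container teardown above is a compose shutdown of the CSIT stack. A minimal bash sketch of an equivalent cleanup step is shown below; the compose file path and the TEST_ENV default are assumptions for illustration only, not taken from this job's scripts.
# Hypothetical teardown sketch; the actual CSIT scripts may differ.
# TEST_ENV is defaulted only to avoid the "variable is not set" warning logged above.
export TEST_ENV="${TEST_ENV:-}"
# 'down' stops and removes the containers and the compose_default network,
# matching the Stopping/Stopped/Removing/Removed sequence in the log.
docker-compose -f compose/docker-compose.yaml down -v --remove-orphans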
[policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins6961592831477526347.sh ---> sysstat.sh [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins12617291833414988149.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp ']' + mkdir -p /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/archives/ [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins7113998865688811890.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins12446928576871187660.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp@tmp/config153692768833332721tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins7118233774203852844.sh ---> create-netrc.sh [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins4384182666604703713.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins6560270366241382181.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins3348896855516403847.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH INFO: No Stack... 
INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash -l /tmp/jenkins3943631210644930786.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-apex-pdp-master-project-csit-verify-apex-pdp/548 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-21085 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 14G 141G 9% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 889 24713 0 6564 30822 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:61:2c:18 brd ff:ff:ff:ff:ff:ff inet 10.30.106.55/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 83453sec preferred_lft 83453sec inet6 fe80::f816:3eff:fe61:2c18/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:33:71:34:d6 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:33ff:fe71:34d6/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21085) 07/03/24 _x86_64_ (8 CPU) 13:41:27 LINUX RESTART (8 CPU) 13:42:02 tps rtps wtps bread/s bwrtn/s 13:43:01 31.54 13.00 
18.54 688.15 17999.80 13:44:01 11.80 0.00 11.80 0.00 16680.82 13:45:01 11.40 0.00 11.40 0.00 16540.35 13:46:01 11.45 0.00 11.45 0.00 16544.44 13:47:01 11.58 0.00 11.58 0.00 16545.64 13:48:01 11.51 0.02 11.50 0.13 16679.49 13:49:01 11.48 0.00 11.48 0.00 16544.58 13:50:01 11.45 0.00 11.45 0.00 16411.40 13:51:01 7.77 0.00 7.77 0.00 10763.81 13:52:01 1.07 0.00 1.07 0.00 13.86 13:53:01 0.93 0.00 0.93 0.00 11.32 13:54:01 0.95 0.00 0.95 0.00 12.80 13:55:01 0.85 0.00 0.85 0.00 10.80 13:56:01 1.05 0.00 1.05 0.00 14.13 13:57:01 5.52 4.07 1.45 32.53 23.86 13:58:01 1.45 0.00 1.45 0.00 19.20 13:59:02 0.92 0.00 0.92 0.00 11.86 14:00:01 1.07 0.00 1.07 0.00 14.51 14:01:01 0.88 0.00 0.88 0.00 11.06 14:02:01 2.50 1.53 0.97 43.19 13.73 14:03:01 1.75 0.53 1.22 13.06 16.93 14:04:01 1.00 0.00 1.00 0.00 13.73 14:05:01 1.05 0.00 1.05 0.00 12.53 14:06:01 0.92 0.00 0.92 0.00 13.06 14:07:01 1.00 0.00 1.00 0.00 12.00 14:08:01 1.00 0.00 1.00 0.00 14.26 14:09:01 1.02 0.00 1.02 0.00 12.53 14:10:01 1.28 0.00 1.28 0.00 16.00 14:11:01 0.80 0.00 0.80 0.00 9.87 14:12:01 0.93 0.00 0.93 0.00 12.13 14:13:01 0.83 0.00 0.83 0.00 9.86 14:14:01 0.73 0.00 0.73 0.00 10.26 14:15:01 1.15 0.00 1.15 0.00 14.40 14:16:01 0.97 0.00 0.97 0.00 12.80 14:17:01 1.02 0.02 1.00 0.13 11.46 14:18:01 1.32 0.00 1.32 0.00 16.80 14:19:01 1.05 0.00 1.05 0.00 13.20 14:20:01 1.10 0.00 1.10 0.00 14.40 14:21:01 0.85 0.00 0.85 0.00 10.66 14:22:01 1.03 0.00 1.03 0.00 13.46 14:23:01 1.07 0.00 1.07 0.00 14.26 14:24:01 1.02 0.00 1.02 0.00 14.40 14:25:01 315.06 37.66 277.40 1765.71 5454.69 14:26:01 178.20 18.75 159.46 2274.29 29905.68 14:27:01 469.29 13.05 456.24 777.37 133191.72 14:28:01 69.81 0.18 69.62 30.79 11895.67 14:29:01 73.84 1.20 72.64 35.99 9115.19 14:30:01 55.37 0.62 54.76 30.93 1247.49 Average: 27.54 1.88 25.66 118.43 6997.79 13:42:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 13:43:01 30642368 31886212 2296852 6.97 38628 1533448 1317216 3.88 632012 1415812 80 13:44:01 30654248 31898212 2284972 6.94 38708 1533452 1317216 3.88 618328 1415784 184 13:45:01 30652252 31896376 2286968 6.94 38796 1533452 1317216 3.88 621400 1415784 16 13:46:01 30647712 31891940 2291508 6.96 38876 1533460 1317216 3.88 626156 1415788 128 13:47:01 30646664 31890984 2292556 6.96 38972 1533456 1333328 3.92 626740 1415796 8 13:48:01 30645208 31889656 2294012 6.96 39060 1533480 1333328 3.92 628224 1415804 64 13:49:01 30638176 31882696 2301044 6.99 39148 1533484 1333328 3.92 634716 1415812 12 13:50:01 30636728 31881380 2302492 6.99 39228 1533488 1333328 3.92 635884 1415808 8 13:51:01 30635124 31879860 2304096 6.99 39300 1533492 1349428 3.97 637308 1415812 148 13:52:01 30634164 31878928 2305056 7.00 39340 1533488 1349428 3.97 638348 1415816 8 13:53:01 30633144 31877976 2306076 7.00 39372 1533500 1349428 3.97 639764 1415820 4 13:54:01 30631656 31876516 2307564 7.01 39404 1533504 1349428 3.97 640700 1415824 8 13:55:01 30630620 31875524 2308600 7.01 39436 1533508 1349428 3.97 642012 1415828 4 13:56:01 30629492 31874420 2309728 7.01 39452 1533516 1349428 3.97 642948 1415836 40 13:57:01 30626304 31872336 2312916 7.02 40480 1533512 1383564 4.07 645032 1415840 176 13:58:01 30625164 31871264 2314056 7.03 40520 1533524 1383564 4.07 646324 1415844 124 13:59:02 30618316 31864456 2320904 7.05 40556 1533524 1383564 4.07 654056 1415844 160 14:00:01 30617288 31863468 2321932 7.05 40596 1533528 1383564 4.07 654952 1415848 12 14:01:01 30616132 31862364 2323088 7.05 40628 1533532 1383564 4.07 656176 1415852 164 14:02:01 30601712 31849448 2337508 
7.10 40676 1534956 1383564 4.07 669956 1416880 236 14:03:01 30572336 31821016 2366884 7.19 40732 1535348 1364428 4.01 698200 1416576 20 14:04:01 30571912 31820656 2367308 7.19 40772 1535360 1364428 4.01 698204 1416580 204 14:05:01 30572380 31821176 2366840 7.19 40820 1535364 1364428 4.01 698504 1416584 8 14:06:01 30570700 31819536 2368520 7.19 40852 1535372 1364428 4.01 699604 1416592 208 14:07:01 30570692 31819564 2368528 7.19 40884 1535376 1364428 4.01 699504 1416596 8 14:08:01 30571104 31820008 2368116 7.19 40932 1535364 1364428 4.01 699572 1416600 48 14:09:01 30570856 31819812 2368364 7.19 40956 1535384 1364428 4.01 699660 1416604 40 14:10:01 30570908 31819892 2368312 7.19 40980 1535388 1348172 3.97 699648 1416608 12 14:11:01 30570448 31819456 2368772 7.19 40996 1535392 1348172 3.97 699736 1416612 132 14:12:01 30570440 31819492 2368780 7.19 41012 1535392 1348172 3.97 699872 1416612 152 14:13:01 30570124 31819188 2369096 7.19 41028 1535396 1348172 3.97 699920 1416616 152 14:14:01 30570164 31819280 2369056 7.19 41060 1535404 1348172 3.97 699732 1416624 256 14:15:01 30570248 31819436 2368972 7.19 41100 1535408 1348172 3.97 700112 1416628 156 14:16:01 30570320 31819544 2368900 7.19 41132 1535412 1348172 3.97 700028 1416632 48 14:17:01 30569232 31818528 2369988 7.20 41168 1535412 1348172 3.97 700488 1416636 192 14:18:01 30569280 31818628 2369940 7.19 41200 1535424 1348172 3.97 700160 1416640 176 14:19:01 30568924 31818328 2370296 7.20 41232 1535428 1348172 3.97 700412 1416648 156 14:20:01 30569216 31818584 2370004 7.20 41272 1535432 1348172 3.97 700256 1416652 8 14:21:01 30568904 31818280 2370316 7.20 41304 1535436 1348172 3.97 700344 1416656 132 14:22:01 30569004 31818452 2370216 7.20 41336 1535440 1348172 3.97 700420 1416660 228 14:23:01 30569012 31818496 2370208 7.20 41384 1535436 1348172 3.97 700400 1416664 28 14:24:01 30569056 31818592 2370164 7.20 41416 1535452 1348172 3.97 700456 1416668 224 14:25:01 30147628 31638120 2791592 8.47 60220 1747388 1508580 4.44 939664 1573992 113104 14:26:01 25665172 31576440 7274048 22.08 120520 5949060 2085256 6.14 1070532 5702292 3550700 14:27:01 24807332 30777728 8131888 24.69 145952 5946860 7909160 23.27 2021076 5528276 0 14:28:01 23165520 29526604 9773700 29.67 165332 6281740 9090316 26.75 3388344 5748580 46252 14:29:01 23098884 29497912 9840336 29.87 175700 6304932 9158196 26.95 3435076 5766340 588 14:30:01 25366700 31616692 7572520 22.99 176544 6170484 1546468 4.55 1361972 5634412 2628 Average: 29947062 31712989 2992158 9.08 52896 2017721 1834827 5.40 841728 1863271 77447 13:42:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 13:43:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:43:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:43:01 ens3 187.49 137.03 523.01 29.73 0.00 0.00 0.00 0.00 13:44:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:44:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 13:44:01 ens3 0.37 0.12 0.07 0.22 0.00 0.00 0.00 0.00 13:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:45:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:45:01 ens3 1.70 0.50 0.48 0.48 0.00 0.00 0.00 0.00 13:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:46:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 13:46:01 ens3 3.17 1.47 2.53 0.84 0.00 0.00 0.00 0.00 13:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:47:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:47:01 ens3 1.17 0.10 0.62 0.01 0.00 0.00 0.00 0.00 13:48:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:01 lo 0.20 0.20 0.01 0.01 0.00 
0.00 0.00 0.00 13:48:01 ens3 1.23 0.27 0.65 0.26 0.00 0.00 0.00 0.00 13:49:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:49:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:49:01 ens3 1.17 0.42 0.62 0.80 0.00 0.00 0.00 0.00 13:50:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:50:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 13:50:01 ens3 0.70 0.17 0.23 0.16 0.00 0.00 0.00 0.00 13:51:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:51:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:51:01 ens3 0.53 0.10 0.16 0.16 0.00 0.00 0.00 0.00 13:52:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:52:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 13:52:01 ens3 0.33 0.20 0.09 0.17 0.00 0.00 0.00 0.00 13:53:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:53:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:53:01 ens3 0.35 0.15 0.13 0.35 0.00 0.00 0.00 0.00 13:54:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:54:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 13:54:01 ens3 0.23 0.15 0.06 0.19 0.00 0.00 0.00 0.00 13:55:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:55:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:55:01 ens3 0.32 0.10 0.17 0.16 0.00 0.00 0.00 0.00 13:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:56:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 13:56:01 ens3 0.23 0.23 0.06 0.33 0.00 0.00 0.00 0.00 13:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:57:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:57:01 ens3 0.27 0.08 0.06 0.16 0.00 0.00 0.00 0.00 13:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:58:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 13:58:01 ens3 0.57 0.27 0.20 0.08 0.00 0.00 0.00 0.00 13:59:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:59:02 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:59:02 ens3 1.10 0.83 0.62 1.27 0.00 0.00 0.00 0.00 14:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:00:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:00:01 ens3 0.34 0.20 0.07 0.21 0.00 0.00 0.00 0.00 14:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:01:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:01:01 ens3 0.22 0.13 0.06 0.32 0.00 0.00 0.00 0.00 14:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:02:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:02:01 ens3 1.00 0.60 0.91 0.34 0.00 0.00 0.00 0.00 14:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:03:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:03:01 ens3 11.31 10.23 6.03 15.59 0.00 0.00 0.00 0.00 14:04:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:04:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:04:01 ens3 0.42 0.20 0.07 0.26 0.00 0.00 0.00 0.00 14:05:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:05:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:05:01 ens3 1.13 0.58 0.42 0.70 0.00 0.00 0.00 0.00 14:06:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:06:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:06:01 ens3 0.23 0.15 0.06 0.07 0.00 0.00 0.00 0.00 14:07:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:07:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:07:01 ens3 0.23 0.08 0.06 0.17 0.00 0.00 0.00 0.00 14:08:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:08:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:08:01 ens3 0.33 0.22 0.13 0.04 0.00 0.00 0.00 0.00 14:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:09:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:09:01 ens3 0.30 0.20 0.11 0.37 0.00 0.00 0.00 0.00 14:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
14:10:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:10:01 ens3 0.23 0.20 0.06 0.18 0.00 0.00 0.00 0.00 14:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:11:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:11:01 ens3 0.20 0.18 0.06 0.31 0.00 0.00 0.00 0.00 14:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:12:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:12:01 ens3 0.20 0.20 0.06 0.21 0.00 0.00 0.00 0.00 14:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:13:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:13:01 ens3 0.43 0.37 0.15 0.23 0.00 0.00 0.00 0.00 14:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:14:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:14:01 ens3 0.23 0.25 0.06 0.24 0.00 0.00 0.00 0.00 14:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:15:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:15:01 ens3 0.38 0.25 0.18 0.18 0.00 0.00 0.00 0.00 14:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:16:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:16:01 ens3 0.22 0.25 0.06 0.02 0.00 0.00 0.00 0.00 14:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:17:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:17:01 ens3 0.25 0.23 0.06 0.35 0.00 0.00 0.00 0.00 14:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:18:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:18:01 ens3 0.67 0.22 0.18 0.08 0.00 0.00 0.00 0.00 14:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:19:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:19:01 ens3 0.90 0.85 0.52 1.05 0.00 0.00 0.00 0.00 14:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:20:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:20:01 ens3 0.23 0.28 0.06 0.36 0.00 0.00 0.00 0.00 14:21:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:21:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:21:01 ens3 0.27 0.08 0.06 0.01 0.00 0.00 0.00 0.00 14:22:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:22:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:22:01 ens3 0.23 0.25 0.06 0.35 0.00 0.00 0.00 0.00 14:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:23:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:23:01 ens3 0.28 0.22 0.13 0.07 0.00 0.00 0.00 0.00 14:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:24:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:24:01 ens3 0.23 0.25 0.06 0.32 0.00 0.00 0.00 0.00 14:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:25:01 lo 0.93 0.93 0.10 0.10 0.00 0.00 0.00 0.00 14:25:01 ens3 148.11 105.87 999.45 46.71 0.00 0.00 0.00 0.00 14:26:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:01 lo 14.66 14.66 1.42 1.42 0.00 0.00 0.00 0.00 14:26:01 ens3 1328.56 752.17 33324.99 63.26 0.00 0.00 0.00 0.00 14:27:01 veth66e193a 0.38 0.50 0.02 0.03 0.00 0.00 0.00 0.00 14:27:01 veth8160a6a 0.38 0.55 0.02 0.03 0.00 0.00 0.00 0.00 14:27:01 veth83c1471 0.00 0.30 0.00 0.02 0.00 0.00 0.00 0.00 14:27:01 br-4e561b93ce74 0.80 0.60 0.07 0.37 0.00 0.00 0.00 0.00 14:28:01 veth66e193a 3.58 4.47 0.71 0.46 0.00 0.00 0.00 0.00 14:28:01 veth8160a6a 11.05 11.76 2.18 1.81 0.00 0.00 0.00 0.00 14:28:01 veth83c1471 0.00 0.12 0.00 0.01 0.00 0.00 0.00 0.00 14:28:01 br-4e561b93ce74 0.50 0.57 0.05 0.04 0.00 0.00 0.00 0.00 14:29:01 veth66e193a 3.43 4.95 0.86 0.39 0.00 0.00 0.00 0.00 14:29:01 veth8160a6a 6.30 9.23 1.46 0.71 0.00 0.00 0.00 0.00 14:29:01 veth83c1471 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00 14:29:01 br-4e561b93ce74 0.27 0.10 0.01 0.01 0.00 0.00 0.00 0.00 14:30:01 docker0 12.01 16.81 2.06 284.16 0.00 0.00 0.00 0.00 14:30:01 lo 
31.23 31.23 2.78 2.78 0.00 0.00 0.00 0.00 14:30:01 ens3 1813.81 1109.53 36171.29 211.02 0.00 0.00 0.00 0.00 Average: docker0 0.25 0.35 0.04 5.92 0.00 0.00 0.00 0.00 Average: lo 0.57 0.57 0.05 0.05 0.00 0.00 0.00 0.00 Average: ens3 37.57 22.93 752.63 4.38 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21085) 07/03/24 _x86_64_ (8 CPU) 13:41:27 LINUX RESTART (8 CPU) 13:42:02 CPU %user %nice %system %iowait %steal %idle 13:43:01 all 1.83 0.00 0.23 1.02 0.01 96.90 13:43:01 0 2.43 0.00 0.39 7.00 0.03 90.14 13:43:01 1 4.44 0.00 0.24 0.02 0.02 95.29 13:43:01 2 1.34 0.00 0.29 0.64 0.00 97.73 13:43:01 3 2.12 0.00 0.20 0.25 0.00 97.42 13:43:01 4 0.93 0.00 0.20 0.07 0.00 98.79 13:43:01 5 0.53 0.00 0.07 0.19 0.02 99.20 13:43:01 6 1.27 0.00 0.25 0.00 0.00 98.47 13:43:01 7 1.61 0.00 0.17 0.00 0.02 98.20 13:44:01 all 0.17 0.00 0.01 0.91 0.00 98.91 13:44:01 0 1.26 0.00 0.03 7.02 0.03 91.65 13:44:01 1 0.03 0.00 0.00 0.00 0.00 99.97 13:44:01 2 0.00 0.00 0.00 0.02 0.00 99.98 13:44:01 3 0.02 0.00 0.00 0.00 0.00 99.98 13:44:01 4 0.00 0.00 0.00 0.00 0.02 99.98 13:44:01 5 0.03 0.00 0.02 0.20 0.00 99.75 13:44:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:44:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:45:01 all 0.03 0.00 0.01 0.89 0.00 99.07 13:45:01 0 0.03 0.00 0.02 7.12 0.03 92.79 13:45:01 1 0.07 0.00 0.00 0.00 0.00 99.93 13:45:01 2 0.00 0.00 0.00 0.00 0.00 100.00 13:45:01 3 0.02 0.00 0.02 0.00 0.00 99.97 13:45:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:45:01 5 0.05 0.00 0.00 0.03 0.00 99.92 13:45:01 6 0.02 0.00 0.00 0.00 0.00 99.98 13:45:01 7 0.03 0.00 0.00 0.00 0.00 99.97 13:46:01 all 0.03 0.00 0.01 0.98 0.01 98.97 13:46:01 0 0.03 0.00 0.05 7.81 0.03 92.08 13:46:01 1 0.05 0.00 0.00 0.00 0.00 99.95 13:46:01 2 0.03 0.00 0.02 0.00 0.00 99.95 13:46:01 3 0.02 0.00 0.02 0.00 0.00 99.97 13:46:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:46:01 5 0.02 0.00 0.00 0.00 0.00 99.98 13:46:01 6 0.03 0.00 0.02 0.08 0.00 99.87 13:46:01 7 0.03 0.00 0.00 0.00 0.00 99.97 13:47:01 all 0.13 0.00 0.01 0.95 0.00 98.91 13:47:01 0 1.00 0.00 0.03 7.55 0.02 91.40 13:47:01 1 0.02 0.00 0.00 0.00 0.00 99.98 13:47:01 2 0.00 0.00 0.02 0.00 0.00 99.98 13:47:01 3 0.03 0.00 0.02 0.00 0.00 99.95 13:47:01 4 0.00 0.00 0.02 0.00 0.00 99.98 13:47:01 5 0.02 0.00 0.00 0.00 0.00 99.98 13:47:01 6 0.00 0.00 0.02 0.00 0.00 99.98 13:47:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:48:01 all 0.06 0.00 0.01 0.85 0.00 99.08 13:48:01 0 0.42 0.00 0.03 6.78 0.03 92.74 13:48:01 1 0.00 0.00 0.00 0.00 0.00 100.00 13:48:01 2 0.00 0.00 0.00 0.00 0.00 100.00 13:48:01 3 0.02 0.00 0.00 0.00 0.00 99.98 13:48:01 4 0.00 0.00 0.02 0.00 0.02 99.97 13:48:01 5 0.02 0.00 0.02 0.00 0.00 99.97 13:48:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:48:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:49:01 all 0.05 0.00 0.00 0.93 0.01 99.02 13:49:01 0 0.08 0.00 0.02 7.48 0.03 92.39 13:49:01 1 0.15 0.00 0.00 0.00 0.00 99.85 13:49:01 2 0.03 0.00 0.02 0.00 0.00 99.95 13:49:01 3 0.10 0.00 0.00 0.00 0.02 99.88 13:49:01 4 0.02 0.00 0.00 0.00 0.00 99.98 13:49:01 5 0.00 0.00 0.00 0.00 0.00 100.00 13:49:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:49:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:50:01 all 0.14 0.00 0.01 1.09 0.00 98.76 13:50:01 0 1.03 0.00 0.02 8.68 0.03 90.24 13:50:01 1 0.02 0.00 0.00 0.00 0.00 99.98 13:50:01 2 0.02 0.00 0.00 0.00 0.00 99.98 13:50:01 3 0.02 0.00 0.03 0.00 0.00 99.95 13:50:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:50:01 5 0.02 0.00 0.00 0.00 0.00 99.98 13:50:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:50:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:51:01 all 0.03 0.00 0.01 0.66 
0.00 99.29 13:51:01 0 0.17 0.00 0.05 5.35 0.03 94.40 13:51:01 1 0.03 0.00 0.00 0.00 0.00 99.97 13:51:01 2 0.02 0.00 0.02 0.00 0.02 99.95 13:51:01 3 0.02 0.00 0.00 0.00 0.00 99.98 13:51:01 4 0.00 0.00 0.00 0.00 0.02 99.98 13:51:01 5 0.00 0.00 0.00 0.00 0.00 100.00 13:51:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:51:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:52:01 all 0.01 0.00 0.01 0.00 0.00 99.97 13:52:01 0 0.05 0.00 0.00 0.03 0.02 99.90 13:52:01 1 0.00 0.00 0.00 0.00 0.00 100.00 13:52:01 2 0.00 0.00 0.02 0.00 0.00 99.98 13:52:01 3 0.02 0.00 0.00 0.00 0.00 99.98 13:52:01 4 0.00 0.00 0.02 0.00 0.00 99.98 13:52:01 5 0.00 0.00 0.02 0.00 0.00 99.98 13:52:01 6 0.02 0.00 0.00 0.00 0.00 99.98 13:52:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:53:01 all 0.01 0.00 0.00 0.00 0.00 99.98 13:53:01 0 0.05 0.00 0.02 0.02 0.02 99.90 13:53:01 1 0.02 0.00 0.00 0.00 0.00 99.98 13:53:01 2 0.00 0.00 0.00 0.00 0.00 100.00 13:53:01 3 0.00 0.00 0.00 0.00 0.00 100.00 13:53:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:53:01 5 0.00 0.00 0.00 0.00 0.00 100.00 13:53:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:53:01 7 0.02 0.00 0.00 0.00 0.00 99.98 13:53:01 CPU %user %nice %system %iowait %steal %idle 13:54:01 all 0.01 0.00 0.01 0.00 0.00 99.98 13:54:01 0 0.07 0.00 0.02 0.03 0.03 99.85 13:54:01 1 0.02 0.00 0.02 0.00 0.02 99.95 13:54:01 2 0.00 0.00 0.00 0.00 0.00 100.00 13:54:01 3 0.00 0.00 0.02 0.00 0.00 99.98 13:54:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:54:01 5 0.02 0.00 0.02 0.00 0.00 99.97 13:54:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:54:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:55:01 all 0.01 0.00 0.00 0.00 0.00 99.98 13:55:01 0 0.03 0.00 0.02 0.02 0.02 99.92 13:55:01 1 0.02 0.00 0.00 0.00 0.00 99.98 13:55:01 2 0.00 0.00 0.00 0.00 0.00 100.00 13:55:01 3 0.02 0.00 0.00 0.00 0.00 99.98 13:55:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:55:01 5 0.00 0.00 0.00 0.00 0.00 100.00 13:55:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:55:01 7 0.02 0.00 0.00 0.00 0.00 99.98 13:56:01 all 0.05 0.00 0.00 0.00 0.00 99.94 13:56:01 0 0.33 0.00 0.00 0.02 0.03 99.62 13:56:01 1 0.02 0.00 0.00 0.00 0.00 99.98 13:56:01 2 0.00 0.00 0.02 0.00 0.00 99.98 13:56:01 3 0.00 0.00 0.02 0.00 0.00 99.98 13:56:01 4 0.02 0.00 0.00 0.00 0.02 99.97 13:56:01 5 0.00 0.00 0.00 0.00 0.00 100.00 13:56:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:56:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:57:01 all 0.26 0.00 0.02 0.01 0.00 99.71 13:57:01 0 2.04 0.00 0.05 0.03 0.02 97.87 13:57:01 1 0.03 0.00 0.00 0.00 0.00 99.97 13:57:01 2 0.03 0.00 0.03 0.03 0.00 99.90 13:57:01 3 0.00 0.00 0.02 0.00 0.00 99.98 13:57:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:57:01 5 0.00 0.00 0.05 0.00 0.00 99.95 13:57:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:57:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:58:01 all 0.26 0.00 0.01 0.00 0.00 99.72 13:58:01 0 2.04 0.00 0.03 0.03 0.02 97.88 13:58:01 1 0.00 0.00 0.00 0.00 0.00 100.00 13:58:01 2 0.00 0.00 0.00 0.00 0.00 100.00 13:58:01 3 0.02 0.00 0.02 0.00 0.00 99.97 13:58:01 4 0.00 0.00 0.00 0.00 0.00 100.00 13:58:01 5 0.02 0.00 0.00 0.00 0.00 99.98 13:58:01 6 0.00 0.00 0.00 0.00 0.00 100.00 13:58:01 7 0.00 0.00 0.00 0.00 0.00 100.00 13:59:02 all 0.03 0.00 0.01 0.00 0.00 99.95 13:59:02 0 0.18 0.00 0.03 0.02 0.02 99.75 13:59:02 1 0.05 0.00 0.00 0.00 0.00 99.95 13:59:02 2 0.00 0.00 0.00 0.00 0.00 100.00 13:59:02 3 0.00 0.00 0.02 0.00 0.00 99.98 13:59:02 4 0.00 0.00 0.00 0.00 0.00 100.00 13:59:02 5 0.02 0.00 0.02 0.00 0.00 99.97 13:59:02 6 0.00 0.00 0.00 0.00 0.00 100.00 13:59:02 7 0.02 0.00 0.00 0.00 0.00 99.98 14:00:01 all 0.03 0.00 0.01 0.00 0.00 99.95 14:00:01 0 0.24 0.00 0.03 0.03 0.03 
99.66 14:00:01 1 0.02 0.00 0.00 0.00 0.00 99.98 14:00:01 2 0.02 0.00 0.00 0.00 0.00 99.98 14:00:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:00:01 4 0.00 0.00 0.02 0.00 0.00 99.98 14:00:01 5 0.00 0.00 0.00 0.00 0.00 100.00 14:00:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:00:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:01:01 all 0.11 0.00 0.01 0.00 0.00 99.88 14:01:01 0 0.83 0.00 0.02 0.02 0.00 99.14 14:01:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:01:01 2 0.00 0.00 0.02 0.00 0.00 99.98 14:01:01 3 0.02 0.00 0.02 0.00 0.00 99.97 14:01:01 4 0.00 0.00 0.00 0.00 0.00 100.00 14:01:01 5 0.02 0.00 0.00 0.00 0.00 99.98 14:01:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:01:01 7 0.00 0.00 0.02 0.00 0.00 99.98 14:02:01 all 0.10 0.00 0.00 0.07 0.00 99.82 14:02:01 0 0.23 0.00 0.00 0.52 0.02 99.23 14:02:01 1 0.13 0.00 0.00 0.00 0.00 99.87 14:02:01 2 0.05 0.00 0.00 0.00 0.00 99.95 14:02:01 3 0.18 0.00 0.03 0.05 0.00 99.73 14:02:01 4 0.07 0.00 0.00 0.00 0.02 99.92 14:02:01 5 0.02 0.00 0.00 0.00 0.00 99.98 14:02:01 6 0.02 0.00 0.00 0.00 0.00 99.98 14:02:01 7 0.05 0.00 0.02 0.00 0.02 99.92 14:03:01 all 0.60 0.00 0.03 0.01 0.01 99.35 14:03:01 0 0.23 0.00 0.05 0.08 0.00 99.63 14:03:01 1 2.15 0.00 0.03 0.02 0.00 97.80 14:03:01 2 1.42 0.00 0.00 0.02 0.00 98.57 14:03:01 3 0.37 0.00 0.03 0.00 0.02 99.58 14:03:01 4 0.05 0.00 0.00 0.00 0.00 99.95 14:03:01 5 0.18 0.00 0.02 0.00 0.00 99.80 14:03:01 6 0.07 0.00 0.05 0.00 0.00 99.88 14:03:01 7 0.35 0.00 0.00 0.00 0.02 99.63 14:04:01 all 0.16 0.00 0.01 0.01 0.00 99.82 14:04:01 0 0.03 0.00 0.00 0.07 0.02 99.88 14:04:01 1 0.02 0.00 0.00 0.00 0.00 99.98 14:04:01 2 1.22 0.00 0.02 0.00 0.00 98.76 14:04:01 3 0.00 0.00 0.02 0.00 0.00 99.98 14:04:01 4 0.00 0.00 0.02 0.00 0.00 99.98 14:04:01 5 0.00 0.00 0.02 0.00 0.02 99.97 14:04:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:04:01 7 0.02 0.00 0.02 0.00 0.02 99.95 14:04:01 CPU %user %nice %system %iowait %steal %idle 14:05:01 all 0.01 0.00 0.00 0.02 0.00 99.97 14:05:01 0 0.00 0.00 0.00 0.13 0.00 99.87 14:05:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:05:01 2 0.00 0.00 0.00 0.00 0.02 99.98 14:05:01 3 0.02 0.00 0.00 0.00 0.00 99.98 14:05:01 4 0.02 0.00 0.00 0.00 0.02 99.97 14:05:01 5 0.02 0.00 0.00 0.00 0.00 99.98 14:05:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:05:01 7 0.00 0.00 0.02 0.00 0.02 99.97 14:06:01 all 0.02 0.00 0.01 0.02 0.00 99.95 14:06:01 0 0.03 0.00 0.02 0.17 0.02 99.77 14:06:01 1 0.02 0.00 0.00 0.00 0.00 99.98 14:06:01 2 0.02 0.00 0.00 0.00 0.00 99.98 14:06:01 3 0.05 0.00 0.00 0.00 0.00 99.95 14:06:01 4 0.00 0.00 0.00 0.00 0.00 100.00 14:06:01 5 0.00 0.00 0.00 0.00 0.00 100.00 14:06:01 6 0.02 0.00 0.02 0.00 0.00 99.97 14:06:01 7 0.05 0.00 0.02 0.00 0.02 99.92 14:07:01 all 0.01 0.00 0.00 0.00 0.00 99.98 14:07:01 0 0.03 0.00 0.03 0.03 0.00 99.90 14:07:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:07:01 2 0.02 0.00 0.00 0.00 0.00 99.98 14:07:01 3 0.00 0.00 0.02 0.00 0.00 99.98 14:07:01 4 0.00 0.00 0.00 0.00 0.00 100.00 14:07:01 5 0.00 0.00 0.00 0.00 0.00 100.00 14:07:01 6 0.00 0.00 0.00 0.00 0.02 99.98 14:07:01 7 0.02 0.00 0.00 0.00 0.02 99.97 14:08:01 all 0.15 0.00 0.01 0.01 0.00 99.83 14:08:01 0 0.02 0.00 0.02 0.08 0.00 99.88 14:08:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:08:01 2 1.06 0.00 0.02 0.00 0.00 98.92 14:08:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:08:01 4 0.00 0.00 0.00 0.00 0.02 99.98 14:08:01 5 0.03 0.00 0.03 0.00 0.00 99.93 14:08:01 6 0.02 0.00 0.00 0.00 0.00 99.98 14:08:01 7 0.03 0.00 0.02 0.00 0.02 99.93 14:09:01 all 0.01 0.00 0.01 0.01 0.00 99.97 14:09:01 0 0.02 0.00 0.02 0.05 0.00 99.92 14:09:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:09:01 2 
0.00 0.00 0.00 0.00 0.02 99.98 14:09:01 3 0.00 0.00 0.02 0.00 0.00 99.98 14:09:01 4 0.02 0.00 0.02 0.00 0.02 99.95 14:09:01 5 0.02 0.00 0.00 0.00 0.00 99.98 14:09:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:09:01 7 0.02 0.00 0.00 0.00 0.00 99.98 14:10:01 all 0.01 0.00 0.01 0.00 0.00 99.97 14:10:01 0 0.03 0.00 0.02 0.02 0.02 99.92 14:10:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:10:01 2 0.00 0.00 0.02 0.00 0.00 99.98 14:10:01 3 0.02 0.00 0.02 0.00 0.00 99.97 14:10:01 4 0.02 0.00 0.02 0.02 0.02 99.93 14:10:01 5 0.03 0.00 0.00 0.00 0.00 99.97 14:10:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:10:01 7 0.02 0.00 0.00 0.00 0.00 99.98 14:11:01 all 0.01 0.00 0.01 0.00 0.00 99.97 14:11:01 0 0.02 0.00 0.02 0.00 0.00 99.97 14:11:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:11:01 2 0.00 0.00 0.00 0.00 0.00 100.00 14:11:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:11:01 4 0.02 0.00 0.02 0.00 0.02 99.95 14:11:01 5 0.02 0.00 0.02 0.00 0.02 99.95 14:11:01 6 0.00 0.00 0.02 0.00 0.00 99.98 14:11:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:12:01 all 0.01 0.00 0.01 0.00 0.00 99.98 14:12:01 0 0.02 0.00 0.02 0.02 0.00 99.95 14:12:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:12:01 2 0.03 0.00 0.00 0.00 0.00 99.97 14:12:01 3 0.02 0.00 0.00 0.00 0.00 99.98 14:12:01 4 0.02 0.00 0.03 0.00 0.02 99.93 14:12:01 5 0.00 0.00 0.02 0.00 0.00 99.98 14:12:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:12:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:13:01 all 0.01 0.00 0.01 0.18 0.00 99.79 14:13:01 0 0.02 0.00 0.02 1.47 0.00 98.50 14:13:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:13:01 2 0.00 0.00 0.02 0.00 0.02 99.97 14:13:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:13:01 4 0.03 0.00 0.02 0.00 0.02 99.93 14:13:01 5 0.02 0.00 0.02 0.00 0.00 99.97 14:13:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:13:01 7 0.03 0.00 0.00 0.00 0.00 99.97 14:14:01 all 0.02 0.00 0.00 0.01 0.00 99.97 14:14:01 0 0.02 0.00 0.00 0.02 0.00 99.97 14:14:01 1 0.00 0.00 0.00 0.03 0.00 99.97 14:14:01 2 0.03 0.00 0.00 0.00 0.00 99.97 14:14:01 3 0.02 0.00 0.00 0.00 0.00 99.98 14:14:01 4 0.03 0.00 0.02 0.00 0.02 99.93 14:14:01 5 0.02 0.00 0.00 0.00 0.02 99.97 14:14:01 6 0.02 0.00 0.00 0.00 0.00 99.98 14:14:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:15:01 all 0.01 0.00 0.01 0.00 0.00 99.98 14:15:01 0 0.02 0.00 0.00 0.02 0.00 99.97 14:15:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:15:01 2 0.00 0.00 0.00 0.00 0.00 100.00 14:15:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:15:01 4 0.03 0.00 0.02 0.00 0.00 99.95 14:15:01 5 0.03 0.00 0.02 0.00 0.00 99.95 14:15:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:15:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:15:01 CPU %user %nice %system %iowait %steal %idle 14:16:01 all 0.01 0.00 0.00 0.00 0.00 99.97 14:16:01 0 0.02 0.00 0.02 0.02 0.00 99.95 14:16:01 1 0.02 0.00 0.00 0.00 0.00 99.98 14:16:01 2 0.02 0.00 0.00 0.00 0.00 99.98 14:16:01 3 0.00 0.00 0.02 0.00 0.00 99.98 14:16:01 4 0.05 0.00 0.02 0.00 0.03 99.90 14:16:01 5 0.03 0.00 0.02 0.00 0.00 99.95 14:16:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:16:01 7 0.02 0.00 0.00 0.00 0.00 99.98 14:17:01 all 0.01 0.00 0.01 0.00 0.00 99.97 14:17:01 0 0.02 0.00 0.00 0.02 0.00 99.97 14:17:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:17:01 2 0.00 0.00 0.00 0.00 0.02 99.98 14:17:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:17:01 4 0.05 0.00 0.02 0.00 0.02 99.92 14:17:01 5 0.03 0.00 0.00 0.00 0.00 99.97 14:17:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:17:01 7 0.00 0.00 0.02 0.00 0.00 99.98 14:18:01 all 0.01 0.00 0.01 0.01 0.00 99.97 14:18:01 0 0.05 0.00 0.02 0.05 0.00 99.88 14:18:01 1 0.00 0.00 0.00 0.00 0.02 99.98 14:18:01 2 0.00 0.00 0.00 0.00 0.00 100.00 14:18:01 3 0.00 0.00 0.02 
0.00 0.00 99.98 14:18:01 4 0.03 0.00 0.03 0.00 0.02 99.92 14:18:01 5 0.02 0.00 0.02 0.00 0.00 99.97 14:18:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:18:01 7 0.00 0.00 0.02 0.00 0.00 99.98 14:19:01 all 0.01 0.00 0.01 0.00 0.00 99.97 14:19:01 0 0.02 0.00 0.00 0.03 0.02 99.93 14:19:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:19:01 2 0.00 0.00 0.02 0.00 0.00 99.98 14:19:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:19:01 4 0.07 0.00 0.00 0.00 0.02 99.92 14:19:01 5 0.02 0.00 0.00 0.00 0.02 99.97 14:19:01 6 0.02 0.00 0.02 0.00 0.00 99.97 14:19:01 7 0.00 0.00 0.00 0.00 0.02 99.98 14:20:01 all 0.05 0.00 0.01 0.00 0.00 99.93 14:20:01 0 0.02 0.00 0.00 0.02 0.00 99.97 14:20:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:20:01 2 0.32 0.00 0.02 0.00 0.02 99.65 14:20:01 3 0.02 0.00 0.02 0.00 0.00 99.97 14:20:01 4 0.03 0.00 0.02 0.00 0.02 99.93 14:20:01 5 0.03 0.00 0.02 0.00 0.00 99.95 14:20:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:20:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:21:01 all 0.26 0.00 0.00 0.00 0.00 99.73 14:21:01 0 0.02 0.00 0.02 0.02 0.00 99.95 14:21:01 1 0.00 0.00 0.00 0.02 0.00 99.98 14:21:01 2 1.97 0.00 0.00 0.00 0.00 98.03 14:21:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:21:01 4 0.02 0.00 0.00 0.00 0.02 99.97 14:21:01 5 0.02 0.00 0.02 0.00 0.00 99.97 14:21:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:21:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:22:01 all 0.09 0.00 0.01 0.00 0.01 99.89 14:22:01 0 0.03 0.00 0.02 0.02 0.00 99.93 14:22:01 1 0.00 0.00 0.00 0.00 0.00 100.00 14:22:01 2 0.68 0.00 0.00 0.00 0.00 99.32 14:22:01 3 0.02 0.00 0.00 0.00 0.00 99.98 14:22:01 4 0.03 0.00 0.02 0.00 0.02 99.93 14:22:01 5 0.02 0.00 0.00 0.00 0.02 99.97 14:22:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:22:01 7 0.00 0.00 0.02 0.00 0.00 99.98 14:23:01 all 0.02 0.00 0.01 0.00 0.00 99.97 14:23:01 0 0.03 0.00 0.00 0.03 0.00 99.93 14:23:01 1 0.02 0.00 0.00 0.00 0.00 99.98 14:23:01 2 0.00 0.00 0.00 0.00 0.00 100.00 14:23:01 3 0.02 0.00 0.02 0.00 0.00 99.97 14:23:01 4 0.03 0.00 0.02 0.00 0.03 99.92 14:23:01 5 0.02 0.00 0.03 0.00 0.00 99.95 14:23:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:23:01 7 0.02 0.00 0.00 0.00 0.00 99.98 14:24:01 all 0.01 0.00 0.00 0.00 0.00 99.98 14:24:01 0 0.00 0.00 0.00 0.02 0.02 99.97 14:24:01 1 0.00 0.00 0.00 0.02 0.00 99.98 14:24:01 2 0.00 0.00 0.00 0.00 0.02 99.98 14:24:01 3 0.00 0.00 0.00 0.00 0.00 100.00 14:24:01 4 0.02 0.00 0.00 0.00 0.02 99.97 14:24:01 5 0.02 0.00 0.00 0.00 0.00 99.98 14:24:01 6 0.00 0.00 0.00 0.00 0.00 100.00 14:24:01 7 0.00 0.00 0.00 0.00 0.00 100.00 14:25:01 all 6.17 0.00 0.68 1.19 0.02 91.94 14:25:01 0 6.07 0.00 0.80 3.68 0.02 89.43 14:25:01 1 2.87 0.00 0.27 0.30 0.02 96.54 14:25:01 2 0.77 0.00 0.63 0.10 0.02 98.48 14:25:01 3 1.52 0.00 0.33 0.03 0.02 98.10 14:25:01 4 20.64 0.00 1.05 1.28 0.03 76.99 14:25:01 5 9.22 0.00 0.70 0.27 0.03 89.78 14:25:01 6 6.20 0.00 0.82 2.04 0.03 90.91 14:25:01 7 2.05 0.00 0.83 1.82 0.00 95.30 14:26:01 all 15.58 0.00 4.90 5.71 0.07 73.74 14:26:01 0 17.28 0.00 4.92 2.03 0.03 75.74 14:26:01 1 20.99 0.00 5.15 8.15 0.07 65.64 14:26:01 2 11.61 0.00 4.60 2.46 0.07 81.27 14:26:01 3 10.18 0.00 4.98 3.69 0.10 81.05 14:26:01 4 30.79 0.00 5.28 4.28 0.07 59.59 14:26:01 5 11.88 0.00 4.85 12.77 0.05 70.46 14:26:01 6 10.08 0.00 5.41 2.26 0.08 82.16 14:26:01 7 12.53 0.00 4.03 9.97 0.05 73.42 14:26:01 CPU %user %nice %system %iowait %steal %idle 14:27:01 all 8.50 0.00 2.78 13.84 0.05 74.83 14:27:01 0 8.59 0.00 2.88 5.56 0.05 82.91 14:27:01 1 6.70 0.00 2.88 12.49 0.03 77.90 14:27:01 2 6.67 0.00 2.83 7.90 0.05 82.55 14:27:01 3 9.64 0.00 2.60 2.96 0.07 84.74 14:27:01 4 9.06 0.00 3.48 59.02 
0.10 28.33 14:27:01 5 9.28 0.00 2.56 21.17 0.03 66.96 14:27:01 6 9.98 0.00 2.37 1.09 0.03 86.52 14:27:01 7 8.10 0.00 2.60 0.91 0.03 88.36 14:28:01 all 21.96 0.00 2.46 1.04 0.08 74.46 14:28:01 0 23.39 0.00 3.32 0.10 0.07 73.12 14:28:01 1 19.03 0.00 2.03 2.80 0.10 76.04 14:28:01 2 24.74 0.00 2.58 0.65 0.08 71.94 14:28:01 3 21.04 0.00 2.27 0.85 0.07 75.77 14:28:01 4 17.06 0.00 1.78 0.17 0.08 80.91 14:28:01 5 24.37 0.00 2.52 1.12 0.07 71.92 14:28:01 6 20.97 0.00 2.45 2.63 0.08 73.86 14:28:01 7 25.08 0.00 2.70 0.02 0.08 72.12 14:29:01 all 4.36 0.00 1.03 0.95 0.05 93.61 14:29:01 0 3.68 0.00 0.92 0.10 0.03 95.27 14:29:01 1 5.55 0.00 0.94 0.00 0.03 93.48 14:29:01 2 5.78 0.00 1.37 0.80 0.07 91.98 14:29:01 3 2.84 0.00 0.89 0.38 0.07 95.82 14:29:01 4 4.90 0.00 1.17 1.78 0.05 92.10 14:29:01 5 4.00 0.00 0.75 0.03 0.03 95.18 14:29:01 6 3.31 0.00 0.97 3.85 0.03 91.83 14:29:01 7 4.84 0.00 1.24 0.62 0.08 93.21 14:30:01 all 2.47 0.00 0.70 0.16 0.04 96.62 14:30:01 0 2.33 0.00 0.87 0.05 0.05 96.70 14:30:01 1 1.57 0.00 0.60 0.13 0.02 97.68 14:30:01 2 2.67 0.00 0.92 0.20 0.05 96.16 14:30:01 3 3.79 0.00 0.82 0.13 0.03 95.23 14:30:01 4 3.34 0.00 0.60 0.20 0.03 95.83 14:30:01 5 1.92 0.00 0.53 0.17 0.03 97.34 14:30:01 6 2.19 0.00 0.58 0.05 0.03 97.15 14:30:01 7 1.97 0.00 0.70 0.40 0.07 96.86 Average: all 1.32 0.00 0.27 0.65 0.01 97.74 Average: 0 1.55 0.00 0.31 1.65 0.02 96.47 Average: 1 1.32 0.00 0.25 0.50 0.01 97.92 Average: 2 1.26 0.00 0.28 0.27 0.01 98.19 Average: 3 1.08 0.00 0.26 0.17 0.01 98.48 Average: 4 1.79 0.00 0.28 1.37 0.02 96.54 Average: 5 1.29 0.00 0.26 0.75 0.01 97.70 Average: 6 1.12 0.00 0.27 0.25 0.01 98.35 Average: 7 1.18 0.00 0.26 0.28 0.01 98.27
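Note: the resource tables above are standard sysstat output for the build node. A minimal bash sketch of the commands that would reproduce them is shown below, assuming the sysstat service has already been recording activity data; the job's sysstat.sh may invoke sar with different options.
# Hypothetical sketch of the reporting step.
sar -b -r -n DEV   # I/O rates, memory usage and per-interface network counters, as in the tables above
sar -P ALL         # per-CPU utilisation breakdown, as in the final table above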