13:41:07 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/138374
13:41:07 Running as SYSTEM
13:41:07 [EnvInject] - Loading node environment variables.
13:41:07 Building remotely on prd-ubuntu1804-docker-8c-8g-21073 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres
13:41:07 [ssh-agent] Looking for ssh-agent implementation...
13:41:07 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
13:41:07 $ ssh-agent
13:41:07 SSH_AUTH_SOCK=/tmp/ssh-NsUJfDY31elN/agent.2074
13:41:07 SSH_AGENT_PID=2076
13:41:07 [ssh-agent] Started.
13:41:07 Running ssh-add (command line suppressed)
13:41:07 Identity added: /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres@tmp/private_key_10903948067921039097.key (/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres@tmp/private_key_10903948067921039097.key)
13:41:07 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
13:41:07 The recommended git tool is: NONE
13:41:18 using credential onap-jenkins-ssh
13:41:18 Wiping out workspace first.
13:41:18 Cloning the remote Git repository
13:41:18 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
13:41:18 > git init /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres # timeout=10
13:41:18 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
13:41:18 > git --version # timeout=10
13:41:18 > git --version # 'git version 2.17.1'
13:41:18 using GIT_SSH to set credentials Gerrit user
13:41:18 Verifying host key using manually-configured host key entries
13:41:18 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
13:41:18 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
13:41:18 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
13:41:19 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
13:41:19 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
13:41:19 using GIT_SSH to set credentials Gerrit user
13:41:19 Verifying host key using manually-configured host key entries
13:41:19 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/74/138374/1 # timeout=30
13:41:19 > git rev-parse 3bda5434d54552b68d25669ae2c12263df23b5bc^{commit} # timeout=10
13:41:19 Checking out Revision 3bda5434d54552b68d25669ae2c12263df23b5bc (refs/changes/74/138374/1)
13:41:19 > git config core.sparsecheckout # timeout=10
13:41:19 > git checkout -f 3bda5434d54552b68d25669ae2c12263df23b5bc # timeout=30
13:41:22 Commit message: "Fixes for CSIT"
13:41:22 > git rev-parse FETCH_HEAD^{commit} # timeout=10
13:41:22 > git rev-list --no-walk 54d234de0d9260f610425cd496a52265a4082441 # timeout=10
13:41:23 provisioning config files...
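The fetch above pulls a Gerrit change ref rather than a branch: Gerrit publishes every patchset under refs/changes/<last two digits of the change number>/<change number>/<patchset>. A minimal sketch of how that ref is derived, using this build's change 138374, patchset 1:

```shell
# Build the Gerrit change ref fetched above (values taken from this build's log).
CHANGE=138374
PATCHSET=1
LAST2=$(printf '%s' "$CHANGE" | tail -c 2)   # last two digits shard the ref namespace
REF="refs/changes/${LAST2}/${CHANGE}/${PATCHSET}"
echo "$REF"   # refs/changes/74/138374/1
# Then, inside a clone of the repository:
# git fetch origin "$REF" && git checkout FETCH_HEAD
```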
13:41:23 copy managed file [npmrc] to file:/home/jenkins/.npmrc
13:41:24 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
13:41:24 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins16037931819232774589.sh
13:41:24 ---> python-tools-install.sh
13:41:24 Setup pyenv:
13:41:24 * system (set by /opt/pyenv/version)
13:41:24 * 3.8.13 (set by /opt/pyenv/version)
13:41:24 * 3.9.13 (set by /opt/pyenv/version)
13:41:24 * 3.10.6 (set by /opt/pyenv/version)
13:41:28 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ZZmm
13:41:28 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
13:41:35 lf-activate-venv(): INFO: Installing: lftools
13:42:06 lf-activate-venv(): INFO: Adding /tmp/venv-ZZmm/bin to PATH
13:42:06 Generating Requirements File
13:42:26 Python 3.10.6
13:42:26 pip 24.1.1 from /tmp/venv-ZZmm/lib/python3.10/site-packages/pip (python 3.10)
13:42:26 appdirs==1.4.4
13:42:26 argcomplete==3.4.0
13:42:26 aspy.yaml==1.3.0
13:42:26 attrs==23.2.0
13:42:26 autopage==0.5.2
13:42:26 beautifulsoup4==4.12.3
13:42:26 boto3==1.34.138
13:42:26 botocore==1.34.138
13:42:26 bs4==0.0.2
13:42:26 cachetools==5.3.3
13:42:26 certifi==2024.6.2
13:42:26 cffi==1.16.0
13:42:26 cfgv==3.4.0
13:42:26 chardet==5.2.0
13:42:26 charset-normalizer==3.3.2
13:42:26 click==8.1.7
13:42:26 cliff==4.7.0
13:42:26 cmd2==2.4.3
13:42:26 cryptography==3.3.2
13:42:26 debtcollector==3.0.0
13:42:26 decorator==5.1.1
13:42:26 defusedxml==0.7.1
13:42:26 Deprecated==1.2.14
13:42:26 distlib==0.3.8
13:42:26 dnspython==2.6.1
13:42:26 docker==4.2.2
13:42:26 dogpile.cache==1.3.3
13:42:26 email_validator==2.2.0
13:42:26 filelock==3.15.4
13:42:26 future==1.0.0
13:42:26 gitdb==4.0.11
13:42:26 GitPython==3.1.43
13:42:26 google-auth==2.31.0
13:42:26 httplib2==0.22.0
13:42:26 identify==2.5.36
13:42:26 idna==3.7
13:42:26 importlib-resources==1.5.0
13:42:26 iso8601==2.1.0
13:42:26 Jinja2==3.1.4
13:42:26 jmespath==1.0.1
13:42:26 jsonpatch==1.33
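The "Generating Requirements File" step above is essentially a venv creation followed by a pip freeze. A minimal re-creation under illustrative paths (not the job's /tmp/venv-ZZmm, and without the lftools install):

```shell
# Create a throwaway venv and snapshot its installed packages (illustrative paths).
python3 -m venv /tmp/demo-venv
# --all keeps pip/setuptools in the snapshot, so the file is non-empty even here.
/tmp/demo-venv/bin/pip freeze --all > /tmp/demo-requirements.txt
wc -l < /tmp/demo-requirements.txt
```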
13:42:26 jsonpointer==3.0.0
13:42:26 jsonschema==4.22.0
13:42:26 jsonschema-specifications==2023.12.1
13:42:26 keystoneauth1==5.6.0
13:42:26 kubernetes==30.1.0
13:42:26 lftools==0.37.10
13:42:26 lxml==5.2.2
13:42:26 MarkupSafe==2.1.5
13:42:26 msgpack==1.0.8
13:42:26 multi_key_dict==2.0.3
13:42:26 munch==4.0.0
13:42:26 netaddr==1.3.0
13:42:26 netifaces==0.11.0
13:42:26 niet==1.4.2
13:42:26 nodeenv==1.9.1
13:42:26 oauth2client==4.1.3
13:42:26 oauthlib==3.2.2
13:42:26 openstacksdk==3.2.0
13:42:26 os-client-config==2.1.0
13:42:26 os-service-types==1.7.0
13:42:26 osc-lib==3.0.1
13:42:26 oslo.config==9.4.0
13:42:26 oslo.context==5.5.0
13:42:26 oslo.i18n==6.3.0
13:42:26 oslo.log==6.0.0
13:42:26 oslo.serialization==5.4.0
13:42:26 oslo.utils==7.1.0
13:42:26 packaging==24.1
13:42:26 pbr==6.0.0
13:42:26 platformdirs==4.2.2
13:42:26 prettytable==3.10.0
13:42:26 pyasn1==0.6.0
13:42:26 pyasn1_modules==0.4.0
13:42:26 pycparser==2.22
13:42:26 pygerrit2==2.0.15
13:42:26 PyGithub==2.3.0
13:42:26 PyJWT==2.8.0
13:42:26 PyNaCl==1.5.0
13:42:26 pyparsing==2.4.7
13:42:26 pyperclip==1.9.0
13:42:26 pyrsistent==0.20.0
13:42:26 python-cinderclient==9.5.0
13:42:26 python-dateutil==2.9.0.post0
13:42:26 python-heatclient==3.5.0
13:42:26 python-jenkins==1.8.2
13:42:26 python-keystoneclient==5.4.0
13:42:26 python-magnumclient==4.5.0
13:42:26 python-novaclient==18.6.0
13:42:26 python-openstackclient==6.6.0
13:42:26 python-swiftclient==4.6.0
13:42:26 PyYAML==6.0.1
13:42:26 referencing==0.35.1
13:42:26 requests==2.32.3
13:42:26 requests-oauthlib==2.0.0
13:42:26 requestsexceptions==1.4.0
13:42:26 rfc3986==2.0.0
13:42:26 rpds-py==0.18.1
13:42:26 rsa==4.9
13:42:26 ruamel.yaml==0.18.6
13:42:26 ruamel.yaml.clib==0.2.8
13:42:26 s3transfer==0.10.2
13:42:26 simplejson==3.19.2
13:42:26 six==1.16.0
13:42:26 smmap==5.0.1
13:42:26 soupsieve==2.5
13:42:26 stevedore==5.2.0
13:42:26 tabulate==0.9.0
13:42:26 toml==0.10.2
13:42:26 tomlkit==0.12.5
13:42:26 tqdm==4.66.4
13:42:26 typing_extensions==4.12.2
13:42:26 tzdata==2024.1
13:42:26 urllib3==1.26.19
13:42:26 virtualenv==20.26.3
13:42:26 wcwidth==0.2.13
13:42:26 websocket-client==1.8.0
13:42:26 wrapt==1.16.0
13:42:26 xdg==6.0.0
13:42:26 xmltodict==0.13.0
13:42:26 yq==3.4.3
13:42:26 [EnvInject] - Injecting environment variables from a build step.
13:42:26 [EnvInject] - Injecting as environment variables the properties content
13:42:26 SET_JDK_VERSION=openjdk17
13:42:26 GIT_URL="git://cloud.onap.org/mirror"
13:42:26
13:42:26 [EnvInject] - Variables injected successfully.
13:42:26 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/sh /tmp/jenkins17378903264984435733.sh
13:42:26 ---> update-java-alternatives.sh
13:42:26 ---> Updating Java version
13:42:26 ---> Ubuntu/Debian system detected
13:42:27 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
13:42:27 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
13:42:27 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
13:42:27 openjdk version "17.0.4" 2022-07-19
13:42:27 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
13:42:27 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
13:42:27 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
13:42:27 [EnvInject] - Injecting environment variables from a build step.
13:42:27 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
13:42:27 [EnvInject] - Variables injected successfully.
13:42:27 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/sh -xe /tmp/jenkins12110968621511761738.sh
13:42:27 + /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/csit/run-project-csit.sh apex-pdp-postgres
13:42:27 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
13:42:27 WARNING!
Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
13:42:27 Configure a credential helper to remove this warning. See
13:42:27 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
13:42:27
13:42:27 Login Succeeded
13:42:27 docker: 'compose' is not a docker command.
13:42:27 See 'docker --help'
13:42:27 Docker Compose Plugin not installed. Installing now...
13:42:27   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
13:42:27                                  Dload  Upload   Total   Spent    Left  Speed
13:42:28 100 60.0M  100 60.0M    0     0  98.0M      0 --:--:-- --:--:-- --:--:--  155M
13:42:28 Setting project configuration for: apex-pdp-postgres
13:42:28 Configuring docker compose...
13:42:30 Starting apex-pdp application with Grafana
13:42:30 apex-pdp Pulling
13:42:30 simulator Pulling
13:42:30 kafka Pulling
13:42:30 mariadb Pulling
13:42:30 prometheus Pulling
13:42:30 grafana Pulling
13:42:30 pap Pulling
13:42:30 api Pulling
13:42:30 zookeeper Pulling
13:42:30 policy-db-migrator Pulling
13:42:30 31e352740f53 Pulling fs layer
13:42:30 57703e441b07 Pulling fs layer
13:42:30 7138254c3790 Pulling fs layer
13:42:30 78f39bed0e83 Pulling fs layer
13:42:30 40796999d308 Pulling fs layer
13:42:30 14ddc757aae0 Pulling fs layer
13:42:30 ebe1cd824584 Pulling fs layer
13:42:30 d2893dc6732f Pulling fs layer
13:42:30 a23a963fcebe Pulling fs layer
13:42:30 369dfa39565e Pulling fs layer
13:42:30 9146eb587aa8 Pulling fs layer
13:42:30 a120f6888c1f Pulling fs layer
13:42:30 78f39bed0e83 Waiting
13:42:30 40796999d308 Waiting
13:42:30 14ddc757aae0 Waiting
13:42:30 ebe1cd824584 Waiting
13:42:30 d2893dc6732f Waiting
13:42:30 9146eb587aa8 Waiting
13:42:30 a120f6888c1f Waiting
13:42:30 a23a963fcebe Waiting
13:42:30 369dfa39565e Waiting
13:42:30 31e352740f53 Pulling fs layer
13:42:30 21c7cf7066d0 Pulling fs layer
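The unencrypted-password warning above goes away once a credential helper is configured in Docker's client config. A sketch of the relevant entry, written to a demo path rather than the real /home/jenkins/.docker/config.json; the helper name "pass" is an assumption (any installed docker-credential-* helper works):

```shell
# Sketch: a "credsStore" entry in the Docker client config delegates login
# secrets to a credential helper instead of storing them in plaintext.
# Demo path only; the real location is $HOME/.docker/config.json.
DEMO_CONFIG=/tmp/demo-docker/config.json
mkdir -p "$(dirname "$DEMO_CONFIG")"
cat > "$DEMO_CONFIG" <<'EOF'
{
  "credsStore": "pass"
}
EOF
cat "$DEMO_CONFIG"
```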
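The curl transfer above is the job fetching the Compose v2 CLI plugin. A sketch of the manual install it performs; the version pin and release URL are assumptions, since the log does not show the exact download target (the fetch itself is left commented out):

```shell
# Sketch: install the Docker Compose CLI plugin into the per-user plugin dir.
COMPOSE_VERSION="v2.24.6"                              # assumed version, not from the log
PLUGIN_DIR="${DOCKER_CONFIG:-$HOME/.docker}/cli-plugins"
URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-linux-x86_64"
mkdir -p "$PLUGIN_DIR"
echo "would fetch: $URL"
# curl -SL "$URL" -o "$PLUGIN_DIR/docker-compose" && chmod +x "$PLUGIN_DIR/docker-compose"
# docker compose version   # confirms the plugin is picked up afterwards
```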
13:42:30 [remaining per-layer "Pulling fs layer" / "Waiting" queue lines and byte-level Downloading/Extracting progress frames condensed; key layer events follow]
13:42:30 31e352740f53 Download complete
13:42:30 78f39bed0e83 Download complete
13:42:30 40796999d308 Download complete
13:42:30 14ddc757aae0 Download complete
13:42:30 ebe1cd824584 Download complete
13:42:30 d2893dc6732f Download complete
13:42:30 a23a963fcebe Download complete
13:42:30 369dfa39565e Download complete
13:42:30 9146eb587aa8 Download complete
13:42:30 a120f6888c1f Download complete
13:42:30 7138254c3790 Download complete
13:42:30 eb5e31f0ecf8 Download complete
13:42:30 31e352740f53 Pull complete
13:42:31 57703e441b07 Download complete
13:42:31 6b867d96d427 Download complete
13:42:31 93832cc54357 Download complete
13:42:32 21c7cf7066d0 Download complete
13:42:32 154b803e2d93 Download complete
13:42:32 e4305231c991 Download complete
13:42:32 f469048fbe8d Download complete
13:42:32 c189e028fabb Download complete
13:42:32 57703e441b07 Pull complete
13:42:33 4faab25371b2 Download complete
13:42:33 21c7cf7066d0 Pull complete
13:42:33 eb5e31f0ecf8 Pull complete
13:42:34 e8bf24a82546 Download complete
13:42:34 215302b53935 Download complete
13:42:34 eb2f448c7730 Download complete
13:42:34 c8ee90c58894 Download complete
13:42:34 257d54e26411 Download complete
13:42:34 7138254c3790 Extracting [==================================================>] 32.98MB/32.98MB
13:42:34 c9bd119720e4 Downloading
[=================================> ] 163.8MB/246.3MB 13:42:34 c990b7e46fc8 Downloading [==================================================>] 1.299kB/1.299kB 13:42:34 c990b7e46fc8 Verifying Checksum 13:42:34 c990b7e46fc8 Download complete 13:42:34 4faab25371b2 Extracting [===============> ] 49.02MB/158.6MB 13:42:34 e30cdb86c4f0 Downloading [==================> ] 37.31MB/98.32MB 13:42:34 e8bf24a82546 Extracting [===========> ] 40.11MB/180.3MB 13:42:34 c9bd119720e4 Downloading [===================================> ] 172.5MB/246.3MB 13:42:34 c3cc5e3d19ac Downloading [==================================================>] 296B/296B 13:42:34 c3cc5e3d19ac Verifying Checksum 13:42:34 c3cc5e3d19ac Download complete 13:42:34 c3cc5e3d19ac Extracting [==================================================>] 296B/296B 13:42:34 c3cc5e3d19ac Extracting [==================================================>] 296B/296B 13:42:34 0d2280d71230 Downloading [=> ] 3.001kB/127.4kB 13:42:34 0d2280d71230 Downloading [==================================================>] 127.4kB/127.4kB 13:42:34 0d2280d71230 Verifying Checksum 13:42:34 984932e12fb0 Downloading [==================================================>] 1.147kB/1.147kB 13:42:34 4faab25371b2 Extracting [=================> ] 54.59MB/158.6MB 13:42:34 e8bf24a82546 Extracting [============> ] 45.68MB/180.3MB 13:42:34 984932e12fb0 Verifying Checksum 13:42:34 984932e12fb0 Download complete 13:42:34 c9bd119720e4 Downloading [====================================> ] 179MB/246.3MB 13:42:34 257d54e26411 Extracting [> ] 557.1kB/73.93MB 13:42:34 e30cdb86c4f0 Downloading [=====================> ] 43.25MB/98.32MB 13:42:34 5687ac571232 Downloading [> ] 539.6kB/91.54MB 13:42:34 4faab25371b2 Extracting [===================> ] 60.72MB/158.6MB 13:42:34 e8bf24a82546 Extracting [===============> ] 54.59MB/180.3MB 13:42:34 e30cdb86c4f0 Downloading [============================> ] 56.77MB/98.32MB 13:42:34 c9bd119720e4 Downloading 
[======================================> ] 187.6MB/246.3MB 13:42:35 5687ac571232 Downloading [===> ] 7.028MB/91.54MB 13:42:35 257d54e26411 Extracting [===> ] 4.456MB/73.93MB 13:42:35 4faab25371b2 Extracting [====================> ] 64.06MB/158.6MB 13:42:35 e8bf24a82546 Extracting [================> ] 61.28MB/180.3MB 13:42:35 e30cdb86c4f0 Downloading [================================> ] 63.26MB/98.32MB 13:42:35 c9bd119720e4 Downloading [=======================================> ] 195.7MB/246.3MB 13:42:35 5687ac571232 Downloading [=======> ] 14.06MB/91.54MB 13:42:35 257d54e26411 Extracting [====> ] 6.685MB/73.93MB 13:42:35 4faab25371b2 Extracting [=======================> ] 74.65MB/158.6MB 13:42:35 c9bd119720e4 Downloading [=========================================> ] 202.8MB/246.3MB 13:42:35 e30cdb86c4f0 Downloading [===================================> ] 70.29MB/98.32MB 13:42:35 e8bf24a82546 Extracting [====================> ] 74.09MB/180.3MB 13:42:35 5687ac571232 Downloading [=========> ] 16.76MB/91.54MB 13:42:35 257d54e26411 Extracting [=====> ] 7.799MB/73.93MB 13:42:35 4faab25371b2 Extracting [========================> ] 76.87MB/158.6MB 13:42:35 c9bd119720e4 Downloading [===========================================> ] 212.5MB/246.3MB 13:42:35 e30cdb86c4f0 Downloading [=========================================> ] 82.18MB/98.32MB 13:42:35 e8bf24a82546 Extracting [======================> ] 80.77MB/180.3MB 13:42:35 5687ac571232 Downloading [==============> ] 26.49MB/91.54MB 13:42:35 257d54e26411 Extracting [=======> ] 11.7MB/73.93MB 13:42:35 4faab25371b2 Extracting [===========================> ] 88.01MB/158.6MB 13:42:35 c9bd119720e4 Downloading [============================================> ] 220.6MB/246.3MB 13:42:35 e30cdb86c4f0 Downloading [=============================================> ] 88.67MB/98.32MB 13:42:35 e8bf24a82546 Extracting [========================> ] 87.46MB/180.3MB 13:42:35 5687ac571232 Downloading [===================> ] 35.14MB/91.54MB 13:42:35 
257d54e26411 Extracting [==========> ] 16.15MB/73.93MB 13:42:35 4faab25371b2 Extracting [==============================> ] 98.04MB/158.6MB 13:42:35 7138254c3790 Pull complete 13:42:35 78f39bed0e83 Extracting [==================================================>] 1.077kB/1.077kB 13:42:35 5687ac571232 Downloading [=====================> ] 39.47MB/91.54MB 13:42:35 c9bd119720e4 Downloading [==============================================> ] 227.1MB/246.3MB 13:42:35 78f39bed0e83 Extracting [==================================================>] 1.077kB/1.077kB 13:42:35 257d54e26411 Extracting [============> ] 18.38MB/73.93MB 13:42:35 4faab25371b2 Extracting [================================> ] 102.5MB/158.6MB 13:42:35 e8bf24a82546 Extracting [=========================> ] 91.36MB/180.3MB 13:42:35 e30cdb86c4f0 Downloading [================================================> ] 95.16MB/98.32MB 13:42:35 e30cdb86c4f0 Verifying Checksum 13:42:35 e30cdb86c4f0 Download complete 13:42:35 deac262509a5 Downloading [==================================================>] 1.118kB/1.118kB 13:42:35 deac262509a5 Verifying Checksum 13:42:35 deac262509a5 Download complete 13:42:35 e8bf24a82546 Extracting [==========================> ] 94.14MB/180.3MB 13:42:35 4faab25371b2 Extracting [==================================> ] 108.6MB/158.6MB 13:42:35 257d54e26411 Extracting [=============> ] 20.05MB/73.93MB 13:42:35 c3cc5e3d19ac Pull complete 13:42:35 9fa9226be034 Downloading [> ] 15.3kB/783kB 13:42:35 0d2280d71230 Extracting [============> ] 32.77kB/127.4kB 13:42:35 c9bd119720e4 Downloading [===============================================> ] 235.7MB/246.3MB 13:42:35 5687ac571232 Downloading [===========================> ] 50.28MB/91.54MB 13:42:35 0d2280d71230 Extracting [==================================================>] 127.4kB/127.4kB 13:42:35 0d2280d71230 Extracting [==================================================>] 127.4kB/127.4kB 13:42:35 e8bf24a82546 Extracting [==========================> 
] 94.7MB/180.3MB 13:42:35 257d54e26411 Extracting [=============> ] 20.61MB/73.93MB 13:42:35 4faab25371b2 Extracting [==================================> ] 109.7MB/158.6MB 13:42:35 9fa9226be034 Downloading [==================================================>] 783kB/783kB 13:42:35 9fa9226be034 Download complete 13:42:35 9fa9226be034 Extracting [==> ] 32.77kB/783kB 13:42:36 c9bd119720e4 Downloading [=================================================> ] 245.5MB/246.3MB 13:42:36 5687ac571232 Downloading [===============================> ] 57.85MB/91.54MB 13:42:36 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 13:42:36 4faab25371b2 Extracting [===================================> ] 112.5MB/158.6MB 13:42:36 c9bd119720e4 Verifying Checksum 13:42:36 c9bd119720e4 Download complete 13:42:36 e8bf24a82546 Extracting [==========================> ] 96.93MB/180.3MB 13:42:36 78f39bed0e83 Pull complete 13:42:36 1617e25568b2 Download complete 13:42:36 9fa9226be034 Extracting [=======================> ] 360.4kB/783kB 13:42:36 40796999d308 Extracting [==================================================>] 5.325kB/5.325kB 13:42:36 40796999d308 Extracting [==================================================>] 5.325kB/5.325kB 13:42:36 0d2280d71230 Pull complete 13:42:36 257d54e26411 Extracting [================> ] 23.95MB/73.93MB 13:42:36 984932e12fb0 Extracting [==================================================>] 1.147kB/1.147kB 13:42:36 984932e12fb0 Extracting [==================================================>] 1.147kB/1.147kB 13:42:36 9fa9226be034 Extracting [==================================================>] 783kB/783kB 13:42:36 3ecda1bfd07b Downloading [> ] 539.6kB/55.21MB 13:42:36 5687ac571232 Downloading [==================================> ] 63.26MB/91.54MB 13:42:36 ac9f4de4b762 Downloading [> ] 506.8kB/50.13MB 13:42:36 4faab25371b2 Extracting [=====================================> ] 118.1MB/158.6MB 13:42:36 e8bf24a82546 Extracting [===========================> ] 98.6MB/180.3MB 
13:42:36 3ecda1bfd07b Downloading [=> ] 2.162MB/55.21MB 13:42:36 4faab25371b2 Extracting [=====================================> ] 119.2MB/158.6MB 13:42:36 ac9f4de4b762 Downloading [=> ] 1.523MB/50.13MB 13:42:36 5687ac571232 Downloading [=====================================> ] 69.2MB/91.54MB 13:42:36 9fa9226be034 Pull complete 13:42:36 257d54e26411 Extracting [=================> ] 25.62MB/73.93MB 13:42:36 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 13:42:36 e8bf24a82546 Extracting [===========================> ] 99.16MB/180.3MB 13:42:36 3ecda1bfd07b Downloading [======> ] 7.028MB/55.21MB 13:42:36 ac9f4de4b762 Downloading [=====> ] 5.586MB/50.13MB 13:42:36 5687ac571232 Downloading [=============================================> ] 83.26MB/91.54MB 13:42:36 4faab25371b2 Extracting [=======================================> ] 125.3MB/158.6MB 13:42:36 257d54e26411 Extracting [===================> ] 28.97MB/73.93MB 13:42:36 1617e25568b2 Extracting [========================================> ] 393.2kB/480.9kB 13:42:36 e8bf24a82546 Extracting [============================> ] 102.5MB/180.3MB 13:42:36 40796999d308 Pull complete 13:42:36 14ddc757aae0 Extracting [==================================================>] 5.314kB/5.314kB 13:42:36 14ddc757aae0 Extracting [==================================================>] 5.314kB/5.314kB 13:42:36 984932e12fb0 Pull complete 13:42:36 ac9f4de4b762 Downloading [=============> ] 13.2MB/50.13MB 13:42:36 3ecda1bfd07b Downloading [==============> ] 16.22MB/55.21MB 13:42:36 5687ac571232 Downloading [================================================> ] 88.67MB/91.54MB 13:42:36 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 13:42:36 4faab25371b2 Extracting [========================================> ] 129.8MB/158.6MB 13:42:36 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 13:42:36 257d54e26411 Extracting [=====================> ] 32.31MB/73.93MB 
13:42:36 e8bf24a82546 Extracting [=============================> ] 106.4MB/180.3MB 13:42:36 5687ac571232 Download complete 13:42:36 ea63b2e6315f Downloading [==================================================>] 605B/605B 13:42:36 ea63b2e6315f Verifying Checksum 13:42:36 ea63b2e6315f Download complete 13:42:36 3ecda1bfd07b Downloading [======================> ] 24.87MB/55.21MB 13:42:36 ac9f4de4b762 Downloading [=====================> ] 21.84MB/50.13MB 13:42:36 fbd390d3bd00 Downloading [==================================================>] 2.675kB/2.675kB 13:42:36 fbd390d3bd00 Verifying Checksum 13:42:36 fbd390d3bd00 Download complete 13:42:36 4faab25371b2 Extracting [===========================================> ] 137.6MB/158.6MB 13:42:36 9b1ac15ef728 Downloading [================================================> ] 3.011kB/3.087kB 13:42:36 9b1ac15ef728 Download complete 13:42:36 257d54e26411 Extracting [========================> ] 35.65MB/73.93MB 13:42:36 14ddc757aae0 Pull complete 13:42:36 1617e25568b2 Pull complete 13:42:36 e8bf24a82546 Extracting [==============================> ] 110.3MB/180.3MB 13:42:36 ebe1cd824584 Extracting [==================================================>] 1.037kB/1.037kB 13:42:36 ebe1cd824584 Extracting [==================================================>] 1.037kB/1.037kB 13:42:36 8682f304eb80 Downloading [=====================================> ] 3.011kB/4.023kB 13:42:36 8682f304eb80 Downloading [==================================================>] 4.023kB/4.023kB 13:42:36 8682f304eb80 Verifying Checksum 13:42:36 8682f304eb80 Download complete 13:42:36 3ecda1bfd07b Downloading [=========================> ] 28.11MB/55.21MB 13:42:36 ac9f4de4b762 Downloading [=========================> ] 25.39MB/50.13MB 13:42:36 4faab25371b2 Extracting [============================================> ] 142MB/158.6MB 13:42:36 5687ac571232 Extracting [> ] 557.1kB/91.54MB 13:42:36 5fbafe078afc Downloading [==================================================>] 
1.44kB/1.44kB 13:42:36 5fbafe078afc Verifying Checksum 13:42:36 5fbafe078afc Download complete 13:42:36 7fb53fd2ae10 Downloading [=> ] 3.009kB/138kB 13:42:36 7fb53fd2ae10 Downloading [==================================================>] 138kB/138kB 13:42:36 7fb53fd2ae10 Verifying Checksum 13:42:36 7fb53fd2ae10 Download complete 13:42:36 592798bd3683 Downloading [==================================================>] 100B/100B 13:42:36 592798bd3683 Verifying Checksum 13:42:36 592798bd3683 Download complete 13:42:36 257d54e26411 Extracting [==========================> ] 38.99MB/73.93MB 13:42:36 473fdc983780 Download complete 13:42:36 5687ac571232 Extracting [===> ] 7.242MB/91.54MB 13:42:36 3ecda1bfd07b Downloading [==================================> ] 38.39MB/55.21MB 13:42:36 ac9f4de4b762 Downloading [================================> ] 33.01MB/50.13MB 13:42:36 e8bf24a82546 Extracting [===============================> ] 114.2MB/180.3MB 13:42:36 4faab25371b2 Extracting [===============================================> ] 149.3MB/158.6MB 13:42:36 4abcf2066143 Downloading [> ] 48.06kB/3.409MB 13:42:36 ebe1cd824584 Pull complete 13:42:36 d2893dc6732f Extracting [==================================================>] 1.038kB/1.038kB 13:42:36 d2893dc6732f Extracting [==================================================>] 1.038kB/1.038kB 13:42:36 257d54e26411 Extracting [============================> ] 42.34MB/73.93MB 13:42:36 4faab25371b2 Extracting [=================================================> ] 156MB/158.6MB 13:42:36 ac9f4de4b762 Downloading [==========================================> ] 42.15MB/50.13MB 13:42:36 5687ac571232 Extracting [=======> ] 13.93MB/91.54MB 13:42:36 e8bf24a82546 Extracting [================================> ] 117MB/180.3MB 13:42:36 3ecda1bfd07b Downloading [========================================> ] 44.33MB/55.21MB 13:42:36 4abcf2066143 Verifying Checksum 13:42:36 4abcf2066143 Download complete 13:42:36 4abcf2066143 Extracting [> ] 65.54kB/3.409MB 
13:42:36 4faab25371b2 Extracting [==================================================>] 158.6MB/158.6MB 13:42:36 c0e05c86127e Downloading [==================================================>] 141B/141B 13:42:36 c0e05c86127e Verifying Checksum 13:42:36 c0e05c86127e Download complete 13:42:36 706651a94df6 Downloading [> ] 31.68kB/3.162MB 13:42:37 257d54e26411 Extracting [==============================> ] 45.12MB/73.93MB 13:42:37 ac9f4de4b762 Downloading [==============================================> ] 46.73MB/50.13MB 13:42:37 4faab25371b2 Pull complete 13:42:37 e8bf24a82546 Extracting [=================================> ] 119.8MB/180.3MB 13:42:37 6b867d96d427 Extracting [==================================================>] 1.153kB/1.153kB 13:42:37 6b867d96d427 Extracting [==================================================>] 1.153kB/1.153kB 13:42:37 d2893dc6732f Pull complete 13:42:37 5687ac571232 Extracting [============> ] 22.28MB/91.54MB 13:42:37 4abcf2066143 Extracting [=====> ] 393.2kB/3.409MB 13:42:37 3ecda1bfd07b Downloading [============================================> ] 48.66MB/55.21MB 13:42:37 a23a963fcebe Extracting [==================================================>] 13.9kB/13.9kB 13:42:37 a23a963fcebe Extracting [==================================================>] 13.9kB/13.9kB 13:42:37 ac9f4de4b762 Verifying Checksum 13:42:37 ac9f4de4b762 Download complete 13:42:37 706651a94df6 Downloading [===========> ] 719.8kB/3.162MB 13:42:37 33e0a01314cc Downloading [> ] 48.06kB/4.333MB 13:42:37 706651a94df6 Verifying Checksum 13:42:37 706651a94df6 Download complete 13:42:37 5687ac571232 Extracting [===============> ] 27.85MB/91.54MB 13:42:37 3ecda1bfd07b Verifying Checksum 13:42:37 3ecda1bfd07b Download complete 13:42:37 257d54e26411 Extracting [================================> ] 47.35MB/73.93MB 13:42:37 e8bf24a82546 Extracting [=================================> ] 122MB/180.3MB 13:42:37 f8b444c6ff40 Downloading [===> ] 3.01kB/47.97kB 13:42:37 f8b444c6ff40 
Download complete 13:42:37 4abcf2066143 Extracting [==================================================>] 3.409MB/3.409MB 13:42:37 e6c38e6d3add Downloading [======> ] 3.01kB/23.82kB 13:42:37 e6c38e6d3add Downloading [==================================================>] 23.82kB/23.82kB 13:42:37 e6c38e6d3add Verifying Checksum 13:42:37 e6c38e6d3add Download complete 13:42:37 6ca01427385e Downloading [> ] 539.6kB/61.48MB 13:42:37 33e0a01314cc Downloading [========================> ] 2.162MB/4.333MB 13:42:37 5687ac571232 Extracting [==================> ] 33.98MB/91.54MB 13:42:37 257d54e26411 Extracting [=================================> ] 50.14MB/73.93MB 13:42:37 e35e8e85e24d Downloading [> ] 506.8kB/50.55MB 13:42:37 e8bf24a82546 Extracting [==================================> ] 124.8MB/180.3MB 13:42:37 4abcf2066143 Pull complete 13:42:37 6b867d96d427 Pull complete 13:42:37 c0e05c86127e Extracting [==================================================>] 141B/141B 13:42:37 c0e05c86127e Extracting [==================================================>] 141B/141B 13:42:37 3ecda1bfd07b Extracting [> ] 557.1kB/55.21MB 13:42:37 6ca01427385e Downloading [=> ] 1.621MB/61.48MB 13:42:37 33e0a01314cc Downloading [================================> ] 2.801MB/4.333MB 13:42:37 a23a963fcebe Pull complete 13:42:37 93832cc54357 Extracting [==================================================>] 1.127kB/1.127kB 13:42:37 93832cc54357 Extracting [==================================================>] 1.127kB/1.127kB 13:42:37 369dfa39565e Extracting [==================================================>] 13.79kB/13.79kB 13:42:37 369dfa39565e Extracting [==================================================>] 13.79kB/13.79kB 13:42:37 33e0a01314cc Verifying Checksum 13:42:37 33e0a01314cc Download complete 13:42:37 5687ac571232 Extracting [====================> ] 38.44MB/91.54MB 13:42:37 d0bef95bc6b2 Downloading [============> ] 3.01kB/11.92kB 13:42:37 d0bef95bc6b2 Downloading 
[==================================================>] 11.92kB/11.92kB 13:42:37 d0bef95bc6b2 Verifying Checksum 13:42:37 d0bef95bc6b2 Download complete 13:42:37 e35e8e85e24d Downloading [===> ] 3.554MB/50.55MB 13:42:37 257d54e26411 Extracting [===================================> ] 52.36MB/73.93MB 13:42:37 af860903a445 Downloading [==================================================>] 1.226kB/1.226kB 13:42:37 af860903a445 Verifying Checksum 13:42:37 af860903a445 Download complete 13:42:37 e8bf24a82546 Extracting [===================================> ] 126.5MB/180.3MB 13:42:37 3ecda1bfd07b Extracting [==> ] 2.785MB/55.21MB 13:42:37 10ac4908093d Downloading [> ] 310.2kB/30.43MB 13:42:37 6ca01427385e Downloading [=======> ] 9.19MB/61.48MB 13:42:37 5687ac571232 Extracting [=======================> ] 42.34MB/91.54MB 13:42:37 e35e8e85e24d Downloading [=========> ] 9.649MB/50.55MB 13:42:37 c0e05c86127e Pull complete 13:42:37 e8bf24a82546 Extracting [===================================> ] 128.1MB/180.3MB 13:42:37 257d54e26411 Extracting [====================================> ] 54.03MB/73.93MB 13:42:37 706651a94df6 Extracting [> ] 32.77kB/3.162MB 13:42:37 3ecda1bfd07b Extracting [====> ] 4.456MB/55.21MB 13:42:37 10ac4908093d Downloading [==> ] 1.555MB/30.43MB 13:42:37 6ca01427385e Downloading [==========> ] 12.98MB/61.48MB 13:42:37 10ac4908093d Downloading [======> ] 3.734MB/30.43MB 13:42:37 5687ac571232 Extracting [=========================> ] 46.24MB/91.54MB 13:42:37 6ca01427385e Downloading [============> ] 15.68MB/61.48MB 13:42:37 e35e8e85e24d Downloading [==============> ] 14.73MB/50.55MB 13:42:37 257d54e26411 Extracting [=====================================> ] 55.71MB/73.93MB 13:42:37 3ecda1bfd07b Extracting [=====> ] 5.571MB/55.21MB 13:42:37 93832cc54357 Pull complete 13:42:37 e8bf24a82546 Extracting [====================================> ] 129.8MB/180.3MB 13:42:37 369dfa39565e Pull complete 13:42:37 706651a94df6 Extracting [=====> ] 327.7kB/3.162MB 13:42:37 
9146eb587aa8 Extracting [==================================================>] 2.856kB/2.856kB 13:42:37 9146eb587aa8 Extracting [==================================================>] 2.856kB/2.856kB 13:42:37 10ac4908093d Downloading [===========> ] 6.847MB/30.43MB 13:42:37 e35e8e85e24d Downloading [=====================> ] 21.84MB/50.55MB 13:42:37 e8bf24a82546 Extracting [====================================> ] 130.4MB/180.3MB 13:42:37 5687ac571232 Extracting [===========================> ] 51.25MB/91.54MB 13:42:37 10ac4908093d Downloading [============> ] 7.47MB/30.43MB 13:42:37 6ca01427385e Downloading [=================> ] 21.63MB/61.48MB 13:42:37 706651a94df6 Extracting [===============> ] 983kB/3.162MB 13:42:37 simulator Pulled 13:42:37 3ecda1bfd07b Extracting [======> ] 7.242MB/55.21MB 13:42:37 257d54e26411 Extracting [=======================================> ] 57.93MB/73.93MB 13:42:37 257d54e26411 Extracting [========================================> ] 60.16MB/73.93MB 13:42:38 10ac4908093d Downloading [==================> ] 11.21MB/30.43MB 13:42:38 6ca01427385e Downloading [======================> ] 28.11MB/61.48MB 13:42:38 e35e8e85e24d Downloading [==============================> ] 30.98MB/50.55MB 13:42:38 3ecda1bfd07b Extracting [========> ] 9.47MB/55.21MB 13:42:38 706651a94df6 Extracting [================================================> ] 3.047MB/3.162MB 13:42:38 5687ac571232 Extracting [==============================> ] 56.26MB/91.54MB 13:42:38 9146eb587aa8 Pull complete 13:42:38 257d54e26411 Extracting [=========================================> ] 60.72MB/73.93MB 13:42:38 e8bf24a82546 Extracting [====================================> ] 133.1MB/180.3MB 13:42:38 10ac4908093d Downloading [===================> ] 12.14MB/30.43MB 13:42:38 6ca01427385e Downloading [=========================> ] 30.82MB/61.48MB 13:42:38 e35e8e85e24d Downloading [===============================> ] 32MB/50.55MB 13:42:38 5687ac571232 Extracting [===============================> ] 
57.38MB/91.54MB 13:42:38 3ecda1bfd07b Extracting [=========> ] 10.03MB/55.21MB 13:42:38 a120f6888c1f Extracting [==================================================>] 2.864kB/2.864kB 13:42:38 a120f6888c1f Extracting [==================================================>] 2.864kB/2.864kB 13:42:38 257d54e26411 Extracting [=========================================> ] 61.28MB/73.93MB 13:42:38 706651a94df6 Extracting [================================================> ] 3.08MB/3.162MB 13:42:38 e8bf24a82546 Extracting [=====================================> ] 133.7MB/180.3MB 13:42:38 10ac4908093d Downloading [===================================> ] 21.79MB/30.43MB 13:42:38 6ca01427385e Downloading [================================> ] 40.01MB/61.48MB 13:42:38 e35e8e85e24d Downloading [===============================================> ] 48.25MB/50.55MB 13:42:38 5687ac571232 Extracting [===================================> ] 64.62MB/91.54MB 13:42:38 e35e8e85e24d Verifying Checksum 13:42:38 e35e8e85e24d Download complete 13:42:38 3ecda1bfd07b Extracting [===========> ] 12.81MB/55.21MB 13:42:38 706651a94df6 Extracting [==================================================>] 3.162MB/3.162MB 13:42:38 44779101e748 Downloading [==================================================>] 1.744kB/1.744kB 13:42:38 44779101e748 Verifying Checksum 13:42:38 44779101e748 Download complete 13:42:38 257d54e26411 Extracting [===========================================> ] 64.62MB/73.93MB 13:42:38 a721db3e3f3d Downloading [> ] 64.45kB/5.526MB 13:42:38 10ac4908093d Downloading [==========================================> ] 26.15MB/30.43MB 13:42:38 a120f6888c1f Pull complete 13:42:38 6ca01427385e Downloading [=======================================> ] 48.12MB/61.48MB 13:42:38 e8bf24a82546 Extracting [=====================================> ] 135.9MB/180.3MB 13:42:38 5687ac571232 Extracting [======================================> ] 70.19MB/91.54MB 13:42:38 policy-db-migrator Pulled 13:42:38 3ecda1bfd07b 
Extracting [=============> ] 14.48MB/55.21MB 13:42:38 257d54e26411 Extracting [============================================> ] 66.29MB/73.93MB 13:42:38 10ac4908093d Verifying Checksum 13:42:38 10ac4908093d Download complete 13:42:38 a721db3e3f3d Downloading [=====================> ] 2.424MB/5.526MB 13:42:38 1850a929b84a Downloading [==================================================>] 149B/149B 13:42:38 1850a929b84a Verifying Checksum 13:42:38 1850a929b84a Download complete 13:42:38 6ca01427385e Downloading [===============================================> ] 58.93MB/61.48MB 13:42:38 e8bf24a82546 Extracting [======================================> ] 138.7MB/180.3MB 13:42:38 397a918c7da3 Downloading [==================================================>] 327B/327B 13:42:38 397a918c7da3 Verifying Checksum 13:42:38 397a918c7da3 Download complete 13:42:38 5687ac571232 Extracting [=========================================> ] 76.87MB/91.54MB 13:42:38 706651a94df6 Pull complete 13:42:38 33e0a01314cc Extracting [> ] 65.54kB/4.333MB 13:42:38 6ca01427385e Verifying Checksum 13:42:38 6ca01427385e Download complete 13:42:38 3ecda1bfd07b Extracting [===============> ] 17.27MB/55.21MB 13:42:38 806be17e856d Downloading [> ] 539.6kB/89.72MB 13:42:38 634de6c90876 Downloading [===========================================> ] 3.011kB/3.49kB 13:42:38 634de6c90876 Downloading [==================================================>] 3.49kB/3.49kB 13:42:38 634de6c90876 Verifying Checksum 13:42:38 634de6c90876 Download complete 13:42:38 a721db3e3f3d Verifying Checksum 13:42:38 a721db3e3f3d Download complete 13:42:38 257d54e26411 Extracting [===============================================> ] 70.19MB/73.93MB 13:42:38 cd00854cfb1a Downloading [=====================> ] 3.011kB/6.971kB 13:42:38 cd00854cfb1a Downloading [==================================================>] 6.971kB/6.971kB 13:42:38 cd00854cfb1a Verifying Checksum 13:42:38 cd00854cfb1a Download complete 13:42:38 10ac4908093d Extracting 
(docker layer download/extraction progress output condensed; completion events retained)
13:42:38 5687ac571232 Pull complete
13:42:39 33e0a01314cc Pull complete
13:42:39 257d54e26411 Pull complete
13:42:39 deac262509a5 Pull complete
13:42:39 f8b444c6ff40 Pull complete
13:42:39 215302b53935 Pull complete
13:42:39 api Pulled
13:42:39 e6c38e6d3add Pull complete
13:42:39 22ebf0e44c85 Download complete
13:42:40 eb2f448c7730 Pull complete
13:42:40 6b11e56702ad Download complete
13:42:40 53d69aa7d3fc Download complete
13:42:40 c8ee90c58894 Pull complete
13:42:40 806be17e856d Download complete
13:42:40 91ef9543149d Download complete
13:42:40 2ec4f59af178 Download complete
13:42:40 3ecda1bfd07b Pull complete
13:42:41 8b7e81cd5ef1 Download complete
13:42:41 c52916c1316e Download complete
13:42:41 a3ab11953ef9 Download complete
13:42:41 bbb9d15c45a1 Download complete
13:42:42 10ac4908093d Pull complete
13:42:42 00b33c871d26 Download complete
13:42:42 e8bf24a82546 Pull complete
13:42:43 0a92c7dea7af Download complete
13:42:43 d93f69e96600 Download complete
13:42:43 7a1cb9ad7f75 Download complete
13:42:43 22ebf0e44c85 Pull complete
13:42:44 e30cdb86c4f0 Pull complete
13:42:44 ac9f4de4b762 Pull complete
13:42:44 44779101e748 Pull complete
13:42:44 6ca01427385e Pull complete
13:42:47 154b803e2d93 Pull complete
13:42:53 ea63b2e6315f Pull complete
13:42:54 c990b7e46fc8 Pull complete
13:42:56 00b33c871d26 Extracting
[==================================================>] 253.3MB/253.3MB 13:42:56 a721db3e3f3d Extracting [========================================> ] 4.456MB/5.526MB 13:42:57 a721db3e3f3d Extracting [========================================> ] 4.522MB/5.526MB 13:42:57 e35e8e85e24d Extracting [====> ] 4.194MB/50.55MB 13:42:57 fbd390d3bd00 Extracting [==================================================>] 2.675kB/2.675kB 13:42:57 fbd390d3bd00 Extracting [==================================================>] 2.675kB/2.675kB 13:42:57 e35e8e85e24d Extracting [======> ] 6.816MB/50.55MB 13:42:57 e4305231c991 Pull complete 13:42:57 00b33c871d26 Pull complete 13:42:57 00b33c871d26 Pull complete 13:42:57 a721db3e3f3d Extracting [===========================================> ] 4.85MB/5.526MB 13:42:57 e35e8e85e24d Extracting [=======> ] 7.864MB/50.55MB 13:42:57 a721db3e3f3d Extracting [==================================================>] 5.526MB/5.526MB 13:42:57 6b11e56702ad Extracting [> ] 98.3kB/7.707MB 13:42:57 6b11e56702ad Extracting [> ] 98.3kB/7.707MB 13:42:57 e35e8e85e24d Extracting [========> ] 8.913MB/50.55MB 13:42:58 6b11e56702ad Extracting [===> ] 491.5kB/7.707MB 13:42:58 6b11e56702ad Extracting [===> ] 491.5kB/7.707MB 13:42:58 e35e8e85e24d Extracting [=========> ] 9.437MB/50.55MB 13:42:58 6b11e56702ad Extracting [==================================================>] 7.707MB/7.707MB 13:42:58 6b11e56702ad Extracting [==================================================>] 7.707MB/7.707MB 13:42:58 e35e8e85e24d Extracting [==========> ] 11.01MB/50.55MB 13:42:58 e35e8e85e24d Extracting [==============> ] 14.16MB/50.55MB 13:42:58 e35e8e85e24d Extracting [==================> ] 18.35MB/50.55MB 13:42:58 e35e8e85e24d Extracting [=====================> ] 21.5MB/50.55MB 13:42:58 f469048fbe8d Extracting [==================================================>] 92B/92B 13:42:58 f469048fbe8d Extracting [==================================================>] 92B/92B 13:42:59 e35e8e85e24d 
Extracting [=======================> ] 23.59MB/50.55MB 13:42:59 e35e8e85e24d Extracting [==========================> ] 26.74MB/50.55MB 13:42:59 e35e8e85e24d Extracting [=============================> ] 29.88MB/50.55MB 13:42:59 e35e8e85e24d Extracting [=================================> ] 34.08MB/50.55MB 13:42:59 e35e8e85e24d Extracting [======================================> ] 38.8MB/50.55MB 13:42:59 e35e8e85e24d Extracting [==========================================> ] 42.47MB/50.55MB 13:43:00 e35e8e85e24d Extracting [================================================> ] 48.76MB/50.55MB 13:43:00 e35e8e85e24d Extracting [==================================================>] 50.55MB/50.55MB 13:43:01 pap Pulled 13:43:02 fbd390d3bd00 Pull complete 13:43:02 a721db3e3f3d Pull complete 13:43:07 f469048fbe8d Pull complete 13:43:07 6b11e56702ad Pull complete 13:43:07 6b11e56702ad Pull complete 13:43:07 9b1ac15ef728 Extracting [==================================================>] 3.087kB/3.087kB 13:43:07 9b1ac15ef728 Extracting [==================================================>] 3.087kB/3.087kB 13:43:08 e35e8e85e24d Pull complete 13:43:09 1850a929b84a Extracting [==================================================>] 149B/149B 13:43:09 1850a929b84a Extracting [==================================================>] 149B/149B 13:43:15 9b1ac15ef728 Pull complete 13:43:15 c189e028fabb Extracting [==================================================>] 300B/300B 13:43:15 c189e028fabb Extracting [==================================================>] 300B/300B 13:43:15 1850a929b84a Pull complete 13:43:15 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 13:43:15 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 13:43:15 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 13:43:15 53d69aa7d3fc Extracting [==================================================>] 
19.96kB/19.96kB 13:43:16 d0bef95bc6b2 Extracting [==================================================>] 11.92kB/11.92kB 13:43:16 d0bef95bc6b2 Extracting [==================================================>] 11.92kB/11.92kB 13:43:18 8682f304eb80 Extracting [==================================================>] 4.023kB/4.023kB 13:43:18 8682f304eb80 Extracting [==================================================>] 4.023kB/4.023kB 13:43:21 c189e028fabb Pull complete 13:43:22 53d69aa7d3fc Pull complete 13:43:22 53d69aa7d3fc Pull complete 13:43:22 397a918c7da3 Extracting [==================================================>] 327B/327B 13:43:22 397a918c7da3 Extracting [==================================================>] 327B/327B 13:43:25 d0bef95bc6b2 Pull complete 13:43:25 c9bd119720e4 Extracting [> ] 557.1kB/246.3MB 13:43:25 c9bd119720e4 Extracting [> ] 2.228MB/246.3MB 13:43:25 c9bd119720e4 Extracting [===> ] 18.38MB/246.3MB 13:43:25 c9bd119720e4 Extracting [======> ] 32.31MB/246.3MB 13:43:25 c9bd119720e4 Extracting [=========> ] 47.35MB/246.3MB 13:43:26 c9bd119720e4 Extracting [============> ] 62.95MB/246.3MB 13:43:26 c9bd119720e4 Extracting [================> ] 79.66MB/246.3MB 13:43:26 c9bd119720e4 Extracting [===================> ] 95.26MB/246.3MB 13:43:26 c9bd119720e4 Extracting [=====================> ] 106.4MB/246.3MB 13:43:26 c9bd119720e4 Extracting [========================> ] 119.8MB/246.3MB 13:43:26 c9bd119720e4 Extracting [===========================> ] 135.9MB/246.3MB 13:43:26 c9bd119720e4 Extracting [==============================> ] 151.5MB/246.3MB 13:43:26 c9bd119720e4 Extracting [==================================> ] 168.2MB/246.3MB 13:43:26 c9bd119720e4 Extracting [=====================================> ] 183.3MB/246.3MB 13:43:27 c9bd119720e4 Extracting [=======================================> ] 192.7MB/246.3MB 13:43:27 8682f304eb80 Pull complete 13:43:27 c9bd119720e4 Extracting [==========================================> ] 208.9MB/246.3MB 13:43:27 
c9bd119720e4 Extracting [=============================================> ] 225.1MB/246.3MB 13:43:27 c9bd119720e4 Extracting [=================================================> ] 242.3MB/246.3MB 13:43:27 c9bd119720e4 Extracting [==================================================>] 246.3MB/246.3MB 13:43:30 397a918c7da3 Pull complete 13:43:30 a3ab11953ef9 Extracting [> ] 426kB/39.52MB 13:43:30 a3ab11953ef9 Extracting [> ] 426kB/39.52MB 13:43:30 a3ab11953ef9 Extracting [=================> ] 13.63MB/39.52MB 13:43:30 a3ab11953ef9 Extracting [=================> ] 13.63MB/39.52MB 13:43:31 a3ab11953ef9 Extracting [================================> ] 25.99MB/39.52MB 13:43:31 a3ab11953ef9 Extracting [================================> ] 25.99MB/39.52MB 13:43:31 a3ab11953ef9 Extracting [==============================================> ] 37.06MB/39.52MB 13:43:31 a3ab11953ef9 Extracting [==============================================> ] 37.06MB/39.52MB 13:43:31 a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB 13:43:31 a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB 13:43:32 af860903a445 Extracting [==================================================>] 1.226kB/1.226kB 13:43:32 af860903a445 Extracting [==================================================>] 1.226kB/1.226kB 13:43:33 c9bd119720e4 Pull complete 13:43:34 806be17e856d Extracting [> ] 557.1kB/89.72MB 13:43:34 806be17e856d Extracting [==> ] 5.014MB/89.72MB 13:43:34 806be17e856d Extracting [=====> ] 10.03MB/89.72MB 13:43:34 806be17e856d Extracting [========> ] 15.6MB/89.72MB 13:43:34 806be17e856d Extracting [============> ] 22.28MB/89.72MB 13:43:35 806be17e856d Extracting [==============> ] 26.74MB/89.72MB 13:43:35 806be17e856d Extracting [=================> ] 31.2MB/89.72MB 13:43:35 5fbafe078afc Extracting [==================================================>] 1.44kB/1.44kB 13:43:35 5fbafe078afc Extracting 
[==================================================>] 1.44kB/1.44kB 13:43:35 806be17e856d Extracting [==================> ] 32.31MB/89.72MB 13:43:35 806be17e856d Extracting [====================> ] 36.21MB/89.72MB 13:43:35 a3ab11953ef9 Pull complete 13:43:35 a3ab11953ef9 Pull complete 13:43:35 806be17e856d Extracting [======================> ] 41.22MB/89.72MB 13:43:35 806be17e856d Extracting [=========================> ] 46.24MB/89.72MB 13:43:36 806be17e856d Extracting [==============================> ] 55.15MB/89.72MB 13:43:36 806be17e856d Extracting [==================================> ] 61.28MB/89.72MB 13:43:36 806be17e856d Extracting [=====================================> ] 67.96MB/89.72MB 13:43:36 806be17e856d Extracting [========================================> ] 71.86MB/89.72MB 13:43:36 806be17e856d Extracting [==========================================> ] 75.76MB/89.72MB 13:43:36 806be17e856d Extracting [=============================================> ] 82.44MB/89.72MB 13:43:36 806be17e856d Extracting [==============================================> ] 83MB/89.72MB 13:43:36 af860903a445 Pull complete 13:43:37 806be17e856d Extracting [==============================================> ] 83.56MB/89.72MB 13:43:37 806be17e856d Extracting [================================================> ] 86.34MB/89.72MB 13:43:37 806be17e856d Extracting [=================================================> ] 88.57MB/89.72MB 13:43:37 806be17e856d Extracting [=================================================> ] 89.13MB/89.72MB 13:43:37 806be17e856d Extracting [==================================================>] 89.72MB/89.72MB 13:43:40 5fbafe078afc Pull complete 13:43:40 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 13:43:40 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 13:43:40 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 13:43:40 
91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 13:43:43 806be17e856d Pull complete 13:43:43 7fb53fd2ae10 Extracting [===========> ] 32.77kB/138kB 13:43:43 7fb53fd2ae10 Extracting [==================================================>] 138kB/138kB 13:43:43 7fb53fd2ae10 Extracting [==================================================>] 138kB/138kB 13:43:44 apex-pdp Pulled 13:43:46 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB 13:43:46 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB 13:43:48 91ef9543149d Pull complete 13:43:48 91ef9543149d Pull complete 13:43:51 grafana Pulled 13:43:52 7fb53fd2ae10 Pull complete 13:43:52 2ec4f59af178 Extracting [==================================================>] 881B/881B 13:43:52 2ec4f59af178 Extracting [==================================================>] 881B/881B 13:43:52 2ec4f59af178 Extracting [==================================================>] 881B/881B 13:43:52 2ec4f59af178 Extracting [==================================================>] 881B/881B 13:43:56 634de6c90876 Pull complete 13:43:56 592798bd3683 Extracting [==================================================>] 100B/100B 13:43:56 592798bd3683 Extracting [==================================================>] 100B/100B 13:43:59 cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB 13:43:59 cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB 13:43:59 2ec4f59af178 Pull complete 13:43:59 2ec4f59af178 Pull complete 13:43:59 592798bd3683 Pull complete 13:43:59 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 13:43:59 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 13:43:59 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 13:43:59 8b7e81cd5ef1 
Extracting [==================================================>] 131B/131B 13:43:59 473fdc983780 Extracting [==================================================>] 721B/721B 13:43:59 473fdc983780 Extracting [==================================================>] 721B/721B 13:44:00 cd00854cfb1a Pull complete 13:44:00 8b7e81cd5ef1 Pull complete 13:44:00 8b7e81cd5ef1 Pull complete 13:44:00 473fdc983780 Pull complete 13:44:00 c52916c1316e Extracting [==================================================>] 171B/171B 13:44:00 c52916c1316e Extracting [==================================================>] 171B/171B 13:44:00 c52916c1316e Extracting [==================================================>] 171B/171B 13:44:00 c52916c1316e Extracting [==================================================>] 171B/171B 13:44:00 mariadb Pulled 13:44:00 prometheus Pulled 13:44:00 c52916c1316e Pull complete 13:44:00 c52916c1316e Pull complete 13:44:00 7a1cb9ad7f75 Extracting [> ] 557.1kB/115.2MB 13:44:00 d93f69e96600 Extracting [> ] 557.1kB/115.2MB 13:44:00 7a1cb9ad7f75 Extracting [=====> ] 12.81MB/115.2MB 13:44:00 d93f69e96600 Extracting [====> ] 11.14MB/115.2MB 13:44:00 7a1cb9ad7f75 Extracting [============> ] 28.97MB/115.2MB 13:44:00 d93f69e96600 Extracting [==========> ] 23.95MB/115.2MB 13:44:01 7a1cb9ad7f75 Extracting [==================> ] 42.34MB/115.2MB 13:44:01 d93f69e96600 Extracting [================> ] 37.88MB/115.2MB 13:44:01 7a1cb9ad7f75 Extracting [=========================> ] 58.49MB/115.2MB 13:44:01 d93f69e96600 Extracting [=======================> ] 54.59MB/115.2MB 13:44:01 7a1cb9ad7f75 Extracting [================================> ] 74.65MB/115.2MB 13:44:01 d93f69e96600 Extracting [==============================> ] 69.63MB/115.2MB 13:44:01 7a1cb9ad7f75 Extracting [=======================================> ] 90.8MB/115.2MB 13:44:01 d93f69e96600 Extracting [=====================================> ] 86.34MB/115.2MB 13:44:01 7a1cb9ad7f75 Extracting 
[==============================================> ] 108.1MB/115.2MB 13:44:01 d93f69e96600 Extracting [===========================================> ] 100.8MB/115.2MB 13:44:01 d93f69e96600 Extracting [================================================> ] 111.4MB/115.2MB 13:44:01 7a1cb9ad7f75 Extracting [=================================================> ] 113.6MB/115.2MB 13:44:01 7a1cb9ad7f75 Extracting [==================================================>] 115.2MB/115.2MB 13:44:01 d93f69e96600 Extracting [==================================================>] 115.2MB/115.2MB 13:44:01 7a1cb9ad7f75 Pull complete 13:44:01 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 13:44:01 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 13:44:01 d93f69e96600 Pull complete 13:44:02 bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 13:44:02 bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 13:44:02 0a92c7dea7af Pull complete 13:44:02 bbb9d15c45a1 Pull complete 13:44:02 zookeeper Pulled 13:44:02 kafka Pulled 13:44:02 Network compose_default Creating 13:44:02 Network compose_default Created 13:44:02 Container zookeeper Creating 13:44:02 Container prometheus Creating 13:44:02 Container mariadb Creating 13:44:02 Container simulator Creating 13:44:17 Container simulator Created 13:44:17 Container prometheus Created 13:44:17 Container grafana Creating 13:44:17 Container mariadb Created 13:44:17 Container policy-db-migrator Creating 13:44:17 Container zookeeper Created 13:44:17 Container kafka Creating 13:44:17 Container grafana Created 13:44:17 Container policy-db-migrator Created 13:44:17 Container policy-api Creating 13:44:17 Container kafka Created 13:44:17 Container policy-api Created 13:44:17 Container policy-pap Creating 13:44:18 Container policy-pap Created 13:44:18 Container policy-apex-pdp Creating 13:44:18 
Container policy-apex-pdp Created
13:44:18 Container simulator Starting
13:44:18 Container prometheus Starting
13:44:18 Container mariadb Starting
13:44:18 Container zookeeper Starting
13:44:20 Container simulator Started
13:44:21 Container zookeeper Started
13:44:21 Container kafka Starting
13:44:22 Container prometheus Started
13:44:22 Container grafana Starting
13:44:23 Container mariadb Started
13:44:23 Container policy-db-migrator Starting
13:44:24 Container policy-db-migrator Started
13:44:24 Container policy-api Starting
13:44:25 Container grafana Started
13:44:26 Container policy-api Started
13:44:26 Container kafka Started
13:44:26 Container policy-pap Starting
13:44:27 Container policy-pap Started
13:44:27 Container policy-apex-pdp Starting
13:44:28 Container policy-apex-pdp Started
13:44:28 Prometheus server: http://localhost:30259
13:44:28 Grafana server: http://localhost:30269
13:44:38 Waiting for REST to come up on localhost port 30003...
13:44:38 NAMES                STATUS
13:44:38 policy-apex-pdp      Up 10 seconds
13:44:38 policy-pap           Up 10 seconds
13:44:38 policy-api           Up 12 seconds
13:44:38 kafka                Up 12 seconds
13:44:38 grafana              Up 13 seconds
13:44:38 policy-db-migrator   Up 14 seconds
13:44:38 zookeeper            Up 17 seconds
13:44:38 simulator            Up 18 seconds
13:44:38 prometheus           Up 16 seconds
13:44:38 mariadb              Up 15 seconds
13:45:09 Waiting for REST to come up on localhost port 30001...
13:45:09 NAMES                STATUS
13:45:09 policy-apex-pdp      Up 40 seconds
13:45:09 policy-pap           Up 41 seconds
13:45:09 policy-api           Up 43 seconds
13:45:09 kafka                Up 42 seconds
13:45:09 grafana              Up 43 seconds
13:45:09 zookeeper            Up 47 seconds
13:45:09 simulator            Up 48 seconds
13:45:09 prometheus           Up 46 seconds
13:45:09 mariadb              Up 45 seconds
13:45:29 Build docker image for robot framework
13:45:29 Error: No such image: policy-csit-robot
13:45:29 Cloning into '/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/csit/resources/tests/models'...
13:45:30 Build robot framework docker image
13:45:30 Sending build context to Docker daemon 16.5MB
13:45:30 Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
13:45:30 3.10-slim-bullseye: Pulling from library/python
13:45:32 76956b537f14: Pull complete
13:45:32 f75f1b8a4051: Pull complete
13:45:32 f9adc358e0b8: Pull complete
13:45:33 f66e101ef41f: Pull complete
13:45:33 b913137adf9e: Pull complete
13:45:33 Digest: sha256:fc8ba6002a477d6536097e9cc529c593cd6621a66c81e601b5353265afd10775
13:45:33 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
13:45:33 ---> 08150e0479fc
13:45:33 Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
13:45:37 ---> 2ed785ba9366
13:45:37 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
13:45:37 ---> 9caf65386ff6
13:45:37 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE TEST_ENV=$TEST_ENV
13:45:37 ---> 98652f582a0b
13:45:37 Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
13:45:50 bcrypt==4.1.3
13:45:50 certifi==2024.6.2
13:45:50 cffi==1.17.0rc1
13:45:50 charset-normalizer==3.3.2
13:45:50 confluent-kafka==2.4.0
13:45:50 cryptography==42.0.8
13:45:50 decorator==5.1.1
13:45:50 deepdiff==7.0.1
13:45:50 dnspython==2.6.1
13:45:50 future==1.0.0
13:45:50 idna==3.7
13:45:50 Jinja2==3.1.4
13:45:50 jsonpath-rw==1.4.0
13:45:50 kafka-python==2.0.2
13:45:50 MarkupSafe==2.1.5
13:45:50 more-itertools==5.0.0
13:45:50 ordered-set==4.1.0
13:45:50 paramiko==3.4.0
13:45:50 pbr==6.0.0
13:45:50 ply==3.11
13:45:50 protobuf==5.27.2
13:45:50 pycparser==2.22
13:45:50 PyNaCl==1.5.0
13:45:50 PyYAML==6.0.2rc1
13:45:50 requests==2.32.3
13:45:50 robotframework==7.0.1
13:45:50 robotframework-onap==0.6.0.dev105
13:45:50 robotframework-requests==1.0a11
13:45:50 robotlibcore-temp==1.0.2
13:45:50 six==1.16.0
13:45:50 urllib3==2.2.2
13:45:56 ---> cdbc3b068fcd
13:45:56 Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
13:45:57 ---> 314766378644
13:45:57 Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
13:45:59 ---> 8ee9a8af7649
13:45:59 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
13:45:59 ---> 901d56a0c676
13:45:59 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
13:45:59 ---> 3f1426eb4c29
13:45:59 Successfully built 3f1426eb4c29
13:45:59 Successfully tagged policy-csit-robot:latest
13:46:02 top - 13:46:02 up 5 min, 0 users, load average: 2.44, 2.29, 1.04
13:46:02 Tasks: 203 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
13:46:02 %Cpu(s): 10.1 us, 2.4 sy, 0.0 ni, 77.0 id, 10.4 wa, 0.0 hi, 0.1 si, 0.1 st
13:46:02        total   used   free   shared   buff/cache   available
13:46:02 Mem:     31G   2.7G    22G     1.3M         6.5G         28G
13:46:02 Swap:   1.0G     0B   1.0G
13:46:02 NAMES             STATUS
13:46:02 policy-apex-pdp   Up About a minute
13:46:02 policy-pap        Up About a minute
13:46:02 policy-api        Up About a minute
13:46:02 kafka             Up About a minute
13:46:02 grafana           Up About a minute
13:46:02 zookeeper         Up About a minute
13:46:02 simulator         Up About a minute
13:46:02 prometheus        Up About a minute
13:46:02 mariadb           Up About a minute
13:46:04 CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
13:46:04 541ab19078ea   policy-apex-pdp   0.43%   183.8MiB / 31.41GiB   0.57%   37.5kB / 54.9kB   0B / 0B          50
13:46:04 4fe0fa13d438   policy-pap        0.65%   492MiB / 31.41GiB     1.53%   172kB / 193kB     0B / 149MB       65
13:46:04 230042e9b132   policy-api        0.11%   515.8MiB / 31.41GiB   1.60%   990kB / 674kB     0B / 0B          56
13:46:04 11a854b1c0d1   kafka             2.12%   392.4MiB / 31.41GiB   1.22%   165kB / 155kB     0B / 594kB       85
13:46:04 425d1c7d8dd7   grafana           0.05%   53.93MiB / 31.41GiB   0.17%   24kB / 4.63kB     0B / 26.1MB      18
13:46:04 299ac44ec8df   zookeeper         0.10%   102MiB / 31.41GiB     0.32%   57.9kB / 52.5kB   98.3kB / 377kB   60
13:46:04 84c4886299ea   simulator         0.07%   122.2MiB / 31.41GiB   0.38%   1.83kB / 0B       4.1kB / 0B       78
13:46:04 97c27a7eed0b   prometheus        0.00%   18.86MiB / 31.41GiB   0.06%   71kB / 3.23kB     0B / 0B          13
13:46:04 f1186209b956   mariadb           0.02%   102.2MiB / 31.41GiB   0.32%   1.01MB / 1.26MB   11MB / 72.7MB    29
13:46:05 Container policy-csit Creating
13:46:05 Container policy-csit Created
13:46:05 Attaching to policy-csit
13:46:06 policy-csit | Invoking the robot tests from: apex-pdp-test.robot apex-slas.robot
13:46:06 policy-csit | Run Robot test
13:46:06 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
13:46:06 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
13:46:06 policy-csit | -v POLICY_API_IP:policy-api:6969
13:46:06 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
13:46:06 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
13:46:06 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
13:46:06 policy-csit | -v APEX_IP:policy-apex-pdp:6969
13:46:06 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
13:46:06 policy-csit | -v KAFKA_IP:kafka:9092
13:46:06 policy-csit | -v PROMETHEUS_IP:prometheus:9090
13:46:06 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
13:46:06 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
13:46:06 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
13:46:06 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
13:46:06 policy-csit | -v TEMP_FOLDER:/tmp/distribution
13:46:06 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
13:46:06 policy-csit | -v TEST_ENV:docker
13:46:06 policy-csit | -v JAEGER_IP:jaeger:16686
13:46:06 policy-csit | Starting Robot test suites ...
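For reference, the nine image-build steps echoed earlier in the log (Step 1/9 through Step 9/9) amount to a Dockerfile along these lines. This is a reconstruction from the build echoes only, not necessarily the exact file in the repository:

```dockerfile
# Reconstructed from the "Step N/9" lines in the build log above.
FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
ARG CSIT_SCRIPT=${CSIT_SCRIPT}
ARG ROBOT_FILE=${ROBOT_FILE}
ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE TEST_ENV=$TEST_ENV
# Install Robot Framework + ONAP keywords from the ONAP staging index,
# then dump the resolved versions (the pip freeze output in the log).
RUN python3 -m pip -qq install --upgrade pip && \
    python3 -m pip -qq install --upgrade \
      --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" \
      'robotframework-onap==0.6.0.*' --pre && \
    python3 -m pip -qq install --upgrade confluent-kafka && \
    python3 -m pip freeze
RUN mkdir -p ${ROBOT_WORKSPACE}
COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
WORKDIR ${ROBOT_WORKSPACE}
CMD ["sh", "-c", "./run-test.sh" ]
```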
13:46:06 policy-csit | ==============================================================================
13:46:06 policy-csit | Apex-Pdp-Test & Apex-Slas
13:46:06 policy-csit | ==============================================================================
13:46:06 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test
13:46:06 policy-csit | ==============================================================================
13:46:06 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
13:46:06 policy-csit | ------------------------------------------------------------------------------
13:46:07 policy-csit | ExecuteApexSampleDomainPolicy | FAIL |
13:46:07 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
13:46:07 policy-csit | ------------------------------------------------------------------------------
13:46:07 policy-csit | ExecuteApexTestPnfPolicy | FAIL |
13:46:07 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
13:46:07 policy-csit | ------------------------------------------------------------------------------
13:46:08 policy-csit | ExecuteApexTestPnfPolicyWithMetadataSet | FAIL |
13:46:08 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
13:46:08 policy-csit | ------------------------------------------------------------------------------
13:46:08 policy-csit | Metrics :: Verify policy-apex-pdp is exporting prometheus metrics | FAIL |
13:46:08 policy-csit | '# HELP jvm_classes_currently_loaded The number of classes that are currently loaded in the JVM
13:46:08 policy-csit | # TYPE jvm_classes_currently_loaded gauge
13:46:08 policy-csit | jvm_classes_currently_loaded 7527.0
13:46:08 policy-csit | # HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
13:46:08 policy-csit | # TYPE jvm_classes_loaded_total counter
13:46:08 policy-csit | jvm_classes_loaded_total 7527.0
13:46:08 policy-csit | # HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
13:46:08 policy-csit | # TYPE jvm_classes_unloaded_total counter
13:46:08 policy-csit | jvm_classes_unloaded_total 0.0
13:46:08 policy-csit | # HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
13:46:08 policy-csit | # TYPE process_cpu_seconds_total counter
13:46:08 policy-csit | process_cpu_seconds_total 7.53
13:46:08 policy-csit | # HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
13:46:08 policy-csit | # TYPE process_start_time_seconds gauge
13:46:08 policy-csit | process_start_time_seconds 1.720014306687E9
13:46:08 policy-csit | [ Message content over the limit has been removed. ]
13:46:08 policy-csit | # TYPE pdpa_policy_deployments_total counter
13:46:08 policy-csit | # HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
13:46:08 policy-csit | # TYPE jvm_memory_pool_allocated_bytes_created gauge
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.720014307951E9
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Old Gen",} 1.720014307971E9
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Eden Space",} 1.720014307971E9
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.720014307971E9
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Survivor Space",} 1.720014307971E9
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.720014307971E9
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.720014307971E9
13:46:08 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.720014307971E9
13:46:08 policy-csit | ' does not contain 'pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0'
13:46:08 policy-csit | ------------------------------------------------------------------------------
13:46:08 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test | FAIL |
13:46:08 policy-csit | 5 tests, 1 passed, 4 failed
13:46:08 policy-csit | ==============================================================================
13:46:08 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas
13:46:08 policy-csit | ==============================================================================
13:46:08 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
13:46:08 policy-csit | ------------------------------------------------------------------------------
13:46:08 policy-csit | ValidatePolicyExecutionAndEventRateLowComplexity :: Validate that ...
| FAIL | 13:46:08 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 13:46:08 policy-csit | ------------------------------------------------------------------------------ 13:46:08 policy-csit | ValidatePolicyExecutionAndEventRateModerateComplexity :: Validate ... | FAIL | 13:46:08 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 13:46:08 policy-csit | ------------------------------------------------------------------------------ 13:46:09 policy-csit | ValidatePolicyExecutionAndEventRateHighComplexity :: Validate that... | FAIL | 13:46:09 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 13:46:09 policy-csit | ------------------------------------------------------------------------------ 13:47:09 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | 13:47:09 policy-csit | ------------------------------------------------------------------------------ 13:47:09 policy-csit | ValidatePolicyExecutionTimes :: Validate policy execution times us... 
| FAIL | 13:47:09 policy-csit | Resolving variable '${resp['data']['result'][0]['value'][1]}' failed: IndexError: list index out of range 13:47:09 policy-csit | ------------------------------------------------------------------------------ 13:47:09 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas | FAIL | 13:47:09 policy-csit | 6 tests, 2 passed, 4 failed 13:47:09 policy-csit | ============================================================================== 13:47:09 policy-csit | Apex-Pdp-Test & Apex-Slas | FAIL | 13:47:09 policy-csit | 11 tests, 3 passed, 8 failed 13:47:09 policy-csit | ============================================================================== 13:47:09 policy-csit | Output: /tmp/results/output.xml 13:47:09 policy-csit | Log: /tmp/results/log.html 13:47:09 policy-csit | Report: /tmp/results/report.html 13:47:09 policy-csit | RESULT: 8 13:47:09 policy-csit exited with code 8 13:47:09 NAMES STATUS 13:47:09 policy-apex-pdp Up 2 minutes 13:47:09 policy-pap Up 2 minutes 13:47:09 policy-api Up 2 minutes 13:47:09 kafka Up 2 minutes 13:47:09 grafana Up 2 minutes 13:47:09 zookeeper Up 2 minutes 13:47:09 simulator Up 2 minutes 13:47:09 prometheus Up 2 minutes 13:47:09 mariadb Up 2 minutes 13:47:09 Shut down started! 13:47:11 Collecting logs from docker compose containers... 
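The four policy-creation failures above all report the same verdict: the suite POSTs a native Apex policy to the Policy API and requires HTTP 201 (Created), but the API answered 200 (OK). A minimal sketch of that style of status assertion — the helper names and the lenient variant are assumptions for illustration, not part of the actual Robot suite:

```python
# Hypothetical sketch of a status-code check like the one failing above.
# The strict check demands exactly 201; an API that answers 200
# (for example, for a policy that already exists) fails it.

def check_status(actual: int, expected: int = 201) -> str:
    """Produce a Robot-style 'Expected status: X != Y' verdict."""
    if actual == expected:
        return "PASS"
    return f"Expected status: {expected} != {actual}"

def check_status_lenient(actual: int, accepted=(200, 201)) -> str:
    """Looser variant that treats both OK and Created as success."""
    return "PASS" if actual in accepted else f"Unexpected status: {actual}"
```

With these definitions, `check_status(200)` reproduces the message seen in the log, while `check_status_lenient(200)` would let the run proceed; whether 200 is acceptable depends on whether policy creation is meant to be idempotent.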
13:47:15 ======== Logs from grafana ========
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.657229694Z level=info msg="Starting Grafana" version=11.1.0 commit=5b85c4c2fcf5d32d4f68aaef345c53096359b2f1 branch=HEAD compiled=2024-07-03T13:44:25Z
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.657600339Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.65768235Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.657725611Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.657822792Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.657884293Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.657989355Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658027625Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658075196Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658152097Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658200738Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658266369Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658329289Z level=info msg=Target target=[all]
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.65839953Z level=info msg="Path Home" path=/usr/share/grafana
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658436041Z level=info msg="Path Data" path=/var/lib/grafana
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658490842Z level=info msg="Path Logs" path=/var/log/grafana
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658585243Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658614043Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
13:47:15 grafana | logger=settings t=2024-07-03T13:44:25.658692804Z level=info msg="App mode production"
13:47:15 grafana | logger=featuremgmt t=2024-07-03T13:44:25.659056029Z level=info msg=FeatureToggles annotationPermissionUpdate=true betterPageScrolling=true alertingNoDataErrorExecution=true nestedFolders=true topnav=true logRowsPopoverMenu=true panelMonitoring=true recordedQueriesMulti=true prometheusMetricEncyclopedia=true publicDashboards=true cloudWatchNewLabelParsing=true cloudWatchCrossAccountQuerying=true correlations=true dataplaneFrontendFallback=true alertingSimplifiedRouting=true kubernetesPlaylists=true ssoSettingsApi=true alertingInsights=true logsExploreTableVisualisation=true prometheusDataplane=true dashgpt=true prometheusConfigOverhaulAuth=true lokiMetricDataplane=true logsContextDatasourceUi=true transformationsRedesign=true recoveryThreshold=true exploreMetrics=true exploreContentOutline=true angularDeprecationUI=true logsInfiniteScrolling=true lokiQuerySplitting=true lokiQueryHints=true influxdbBackendMigration=true awsDatasourcesNewFormStyling=true lokiStructuredMetadata=true awsAsyncQueryCaching=true managedPluginsInstall=true
13:47:15 grafana | logger=sqlstore
t=2024-07-03T13:44:25.659193171Z level=info msg="Connecting to DB" dbtype=sqlite3 13:47:15 grafana | logger=sqlstore t=2024-07-03T13:44:25.659349914Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.660930795Z level=info msg="Locking database" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.661000176Z level=info msg="Starting DB migrations" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.661688456Z level=info msg="Executing migration" id="create migration_log table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.66268465Z level=info msg="Migration successfully executed" id="create migration_log table" duration=995.694µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.670640961Z level=info msg="Executing migration" id="create user table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.672932972Z level=info msg="Migration successfully executed" id="create user table" duration=2.293981ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.682528106Z level=info msg="Executing migration" id="add unique index user.login" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.684103388Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.569962ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.854540076Z level=info msg="Executing migration" id="add unique index user.email" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.856197759Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.660323ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.895190841Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:25.896453809Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.264138ms 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:26.032226849Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.03356215Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.337751ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.251935291Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.257756942Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=5.822081ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.295768302Z level=info msg="Executing migration" id="create user table v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.297282326Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.513604ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.384042843Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.386170596Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=2.130803ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.475446263Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.478603032Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=3.161589ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.604861884Z level=info msg="Executing migration" id="copy data_source v1 to v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.605783018Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=922.914µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.621018625Z level=info msg="Executing migration" id="Drop old table user_v1" 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:26.621676895Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=658.19µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.707710742Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.709724283Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=2.018991ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.750232702Z level=info msg="Executing migration" id="Update user table charset" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.750276443Z level=info msg="Migration successfully executed" id="Update user table charset" duration=45.401µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.97986493Z level=info msg="Executing migration" id="Add last_seen_at column to user" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:26.982744615Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.881905ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.057134103Z level=info msg="Executing migration" id="Add missing user data" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.057898773Z level=info msg="Migration successfully executed" id="Add missing user data" duration=779.871µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.083103181Z level=info msg="Executing migration" id="Add is_disabled column to user" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.084415508Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.314907ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.164825536Z level=info msg="Executing migration" id="Add index user.login/user.email" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.167178977Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=2.354861ms 
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.400403047Z level=info msg="Executing migration" id="Add is_service_account column to user" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.403769201Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=3.367774ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.46812239Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.478227842Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.109312ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.553788437Z level=info msg="Executing migration" id="Add uid column to user" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.555487339Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.697102ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.601437628Z level=info msg="Executing migration" id="Update uid column values for users" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.602081837Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=652.389µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.663968853Z level=info msg="Executing migration" id="Add unique index user_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.665343801Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.396028ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.749443688Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.750298669Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same 
name across orgs" duration=857.881µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.858875664Z level=info msg="Executing migration" id="update login and email fields to lowercase" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.859734685Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=859.891µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.870292463Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.87083105Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=572.277µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.875822265Z level=info msg="Executing migration" id="create temp user table v1-7" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.877276994Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.454309ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.881592321Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.882698145Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.106124ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.889006197Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.889721296Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=710.589µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.895209118Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.895901767Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" 
duration=692.479µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.902079538Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.902888268Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=809.04µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.911055705Z level=info msg="Executing migration" id="Update temp_user table charset" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.911091775Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=28.52µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.918940477Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.920891283Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.929905ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.926145081Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.927043853Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=900.682µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.960405578Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.96515938Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=4.754872ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.97279491Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.973339897Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=545.277µs 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:27.977392369Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.985455145Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=8.058195ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.989878992Z level=info msg="Executing migration" id="create temp_user v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.990846785Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=968.593µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.994786586Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:27.995329443Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=543.127µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.000841815Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.001355862Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=514.147µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.00884933Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.0094013Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=552.07µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.015996151Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.016853865Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=852.793µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.024136588Z level=info 
msg="Executing migration" id="copy temp_user v1 to v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.024463083Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=327.475µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.027670083Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.028088529Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=418.316µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.032380516Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.032817332Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=426.026µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.037418544Z level=info msg="Executing migration" id="create star table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.038142785Z level=info msg="Migration successfully executed" id="create star table" duration=723.661µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.041273283Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.042009335Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=735.872µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.044878819Z level=info msg="Executing migration" id="create org table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.046043857Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.164488ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.049040334Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:28.049722564Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=681.49µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.053610865Z level=info msg="Executing migration" id="create org_user table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.054311275Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=700.13µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.059591937Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.06108663Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.497333ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.065074602Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.065674232Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=600.599µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.068370043Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.069181886Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=810.873µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.074083992Z level=info msg="Executing migration" id="Update org table charset" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.074209314Z level=info msg="Migration successfully executed" id="Update org table charset" duration=127.982µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.07850253Z level=info msg="Executing migration" id="Update org_user table charset" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.078574231Z level=info msg="Migration successfully executed" 
id="Update org_user table charset" duration=74.711µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.082635074Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.083351985Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=717.171µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.086529425Z level=info msg="Executing migration" id="create dashboard table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.087406948Z level=info msg="Migration successfully executed" id="create dashboard table" duration=877.713µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.12427866Z level=info msg="Executing migration" id="add index dashboard.account_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.125843534Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.566894ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.130504626Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.131509832Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.004526ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.136919636Z level=info msg="Executing migration" id="create dashboard_tag table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.137810789Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=893.603µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.143726391Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.144749967Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.015656ms 13:47:15 
grafana | logger=migrator t=2024-07-03T13:44:28.150319344Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.151138186Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=818.362µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.154836454Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.163553899Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.717235ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.168963183Z level=info msg="Executing migration" id="create dashboard v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.169554242Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=590.819µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.17394523Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.174898395Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=953.085µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.181433627Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.182784358Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.350321ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.18869457Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.18937592Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=681.13µs 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:28.194221396Z level=info msg="Executing migration" id="drop table dashboard_v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.195275232Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.053637ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.200770557Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.20096366Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=195.053µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.206460916Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.208256373Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.790337ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.221457268Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.224503516Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.044408ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.229903419Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.232088314Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.186445ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.237524108Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.23828785Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=763.562µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.24153091Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.24345934Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.91588ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.249082427Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.251300392Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=2.220005ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.257130842Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.257918935Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=788.163µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.261788255Z level=info msg="Executing migration" id="Update dashboard table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.261819105Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=31.48µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.267750447Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.267808258Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=58.701µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.27373557Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.277209164Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.472814ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.283863298Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.285506143Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.642925ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.289725269Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.292305619Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.57776ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.298436274Z level=info msg="Executing migration" id="Add column uid in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.301299868Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.855194ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.307775589Z level=info msg="Executing migration" id="Update uid column values in dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.308192186Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=416.696µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.312188467Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.313259994Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.071787ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.317778775Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.31878707Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.010606ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.325326632Z level=info msg="Executing migration" id="Update dashboard title length"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.325358162Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=33.311µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.329795831Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.33105102Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.254849ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.335423028Z level=info msg="Executing migration" id="create dashboard_provisioning"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.33616577Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=739.912µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.342102732Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.34840024Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.297498ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.352644706Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.353411988Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=763.792µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.358584508Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.360465197Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.882199ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.367375145Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.368849477Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.473672ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.373849695Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.374643458Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=792.383µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.379154978Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.379919829Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=765.901µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.386447891Z level=info msg="Executing migration" id="Add check_sum column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.390458963Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=4.009612ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.396374735Z level=info msg="Executing migration" id="Add index for dashboard_title"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.39737517Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.000735ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.402230336Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.402674623Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=444.017µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.406039595Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.406477412Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=438.517µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.41345311Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.414688009Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.235849ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.41793236Z level=info msg="Executing migration" id="Add isPublic for dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.422854266Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=4.917976ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.430334353Z level=info msg="Executing migration" id="Add deleted for dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.432785041Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.456308ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.438371977Z level=info msg="Executing migration" id="Add index for deleted"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.439494985Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=1.123458ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.442190646Z level=info msg="Executing migration" id="create data_source table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.443669Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.480493ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.486399703Z level=info msg="Executing migration" id="add index data_source.account_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.488455305Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=2.057532ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.495247571Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.496438789Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.189199ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.502208759Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.503438808Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.227039ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.509682205Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.511230519Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.547674ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.524508795Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.53575219Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.241725ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.542315231Z level=info msg="Executing migration" id="create data_source table v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.543188645Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=871.644µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.550700352Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.554251067Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=3.549105ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.662243024Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.664320766Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=2.075242ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.67741229Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.678439966Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.031606ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.684111754Z level=info msg="Executing migration" id="Add column with_credentials"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.687411025Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.296861ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.693395828Z level=info msg="Executing migration" id="Add secure json data column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.696913943Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.515045ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.700786233Z level=info msg="Executing migration" id="Update data_source table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.700840304Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=57.941µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.705652018Z level=info msg="Executing migration" id="Update initial version to 1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.706372199Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=708.751µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.712870141Z level=info msg="Executing migration" id="Add read_only data column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.71604259Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.17319ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.720739943Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.721447774Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=710.081µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.728233179Z level=info msg="Executing migration" id="Update json_data with nulls"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.729005571Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=774.742µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.735432211Z level=info msg="Executing migration" id="Add uid column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.738758602Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.326871ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.744922788Z level=info msg="Executing migration" id="Update uid value"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.745581439Z level=info msg="Migration successfully executed" id="Update uid value" duration=658.23µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.750245821Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.751787945Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.541984ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.756407737Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.758115383Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.707296ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.764009925Z level=info msg="Executing migration" id="Add is_prunable column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.76759981Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=3.590695ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.770495796Z level=info msg="Executing migration" id="Add api_version column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.774125852Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.630246ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.777519784Z level=info msg="Executing migration" id="create api_key table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.778557331Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.037787ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.7843166Z level=info msg="Executing migration" id="add index api_key.account_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.78621502Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.90864ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.790586877Z level=info msg="Executing migration" id="add index api_key.key"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.791979899Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.393022ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.796153994Z level=info msg="Executing migration" id="add index api_key.account_id_name"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.797728059Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.615555ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.804207759Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.805176964Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=969.235µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.810015969Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.810952504Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=936.285µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.815329512Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.816562851Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.232379ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.822253879Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.830743871Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.490272ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.88992417Z level=info msg="Executing migration" id="create api_key table v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.891572496Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.650286ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.897746302Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.899187974Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.442062ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.902526146Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.903510281Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=983.815µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.909432393Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.911866811Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=2.434108ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.917795713Z level=info msg="Executing migration" id="copy api_key v1 to v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.918310021Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=514.308µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.921901387Z level=info msg="Executing migration" id="Drop old table api_key_v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.922621098Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=723.361µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.928697972Z level=info msg="Executing migration" id="Update api_key table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.928746173Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=49.781µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.934909269Z level=info msg="Executing migration" id="Add expires to api_key table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:28.940612568Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=5.703299ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.05993517Z level=info msg="Executing migration" id="Add service account foreign key"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.065294654Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=5.353724ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.07013031Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.070441315Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=311.005µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.076174375Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.078962419Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.787344ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.083793235Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.086984285Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.19105ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.092999459Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.093947474Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=947.975µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.098665788Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.09941081Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=745.022µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.102509499Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.103559855Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.042396ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.108455842Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.110641417Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=2.177894ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.116387617Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.117625436Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.237659ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.121770411Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.123048731Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.27832ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.128904333Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.129164338Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=259.964µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.134707175Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.134764196Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=59.03µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.141042724Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.146405228Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=5.362994ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.178010315Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.183096885Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=5.08814ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.186480788Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.186684051Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=203.043µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.191260753Z level=info msg="Executing migration" id="create quota table v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.192158857Z level=info msg="Migration successfully executed" id="create quota table v1" duration=897.354µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.196893121Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.198477656Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.584455ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.202480309Z level=info msg="Executing migration" id="Update quota table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.20253277Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=53.801µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.206539633Z level=info msg="Executing migration" id="create plugin_setting table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.20761614Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.076197ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.212348614Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.21338975Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.040896ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.247677789Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.253384919Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.70765ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.258849824Z level=info msg="Executing migration" id="Update plugin_setting table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.258895625Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=47.951µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.262901838Z level=info msg="Executing migration" id="create session table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.264428982Z level=info msg="Migration successfully executed" id="create session table" duration=1.527144ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.268462406Z level=info msg="Executing migration" id="Drop old table playlist table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.268827211Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=364.805µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.272839784Z level=info msg="Executing migration" id="Drop old table playlist_item table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.273059187Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=219.413µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.277521918Z level=info msg="Executing migration" id="create playlist table v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.278472533Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=950.615µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.282277593Z level=info msg="Executing migration" id="create playlist item table v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.283130406Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=852.813µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.286704362Z level=info msg="Executing migration" id="Update playlist table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.286732363Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.001µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.290557413Z level=info msg="Executing migration" id="Update playlist_item table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.290606623Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=50.86µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.295400039Z level=info msg="Executing migration" id="Add playlist column created_at"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.300837704Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.428766ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.304949429Z level=info msg="Executing migration" id="Add playlist column updated_at"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.307405907Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.456478ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.311000954Z level=info msg="Executing migration" id="drop preferences table v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.311060524Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=59.57µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.31525239Z level=info msg="Executing migration" id="drop preferences table v3"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.315474134Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=220.804µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.319623629Z level=info msg="Executing migration" id="create preferences table v3"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.320638875Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.013316ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.324719789Z level=info msg="Executing migration" id="Update preferences table charset"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.32475936Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=40.801µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.329695847Z level=info msg="Executing migration" id="Add column team_id in preferences"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.33307853Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.382093ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.336438403Z level=info msg="Executing migration" id="Update team_id column values in preferences"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.336750598Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=312.015µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.34006061Z level=info msg="Executing migration" id="Add column week_start in preferences"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.343434273Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.373183ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.346662444Z level=info msg="Executing migration" id="Add column preferences.json_data"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.349945665Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.282651ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.354198452Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.354402866Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=204.363µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.357675017Z level=info msg="Executing migration" id="Add preferences index org_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.358806265Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.130968ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.362471002Z level=info msg="Executing migration" id="Add preferences index user_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.363519269Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.047737ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.36808892Z level=info msg="Executing migration" id="create alert table v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.369245249Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.156309ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.372882086Z level=info msg="Executing migration" id="add index alert org_id & id "
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.373818781Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=936.495µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.377867234Z level=info msg="Executing migration" id="add index alert state"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.379216455Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.348361ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.392475274Z level=info msg="Executing migration" id="add index alert dashboard_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.395139945Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=2.684912ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.51956048Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.522026208Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=2.485729ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.533133213Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.535177325Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.999691ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.548547785Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.549402889Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=855.154µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.561050891Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.574179228Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.132647ms
13:47:15 grafana | logger=migrator
t=2024-07-03T13:44:29.606590187Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.608771171Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=2.194245ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.614927518Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.616844708Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.91643ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.621455751Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.622017579Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=561.238µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.626901466Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.627724789Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=823.353µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.63226293Z level=info msg="Executing migration" id="create alert_notification table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.633233705Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=970.885µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.63733848Z level=info msg="Executing migration" id="Add column is_default" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.643439666Z level=info msg="Migration successfully executed" id="Add column is_default" duration=6.103245ms 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:29.648604037Z level=info msg="Executing migration" id="Add column frequency" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.654195355Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.591648ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.658780767Z level=info msg="Executing migration" id="Add column send_reminder" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.662751119Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.969772ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.666443897Z level=info msg="Executing migration" id="Add column disable_resolve_message" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.671645899Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.200302ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.676033448Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.677527681Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.495163ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.682175344Z level=info msg="Executing migration" id="Update alert table charset" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.682202375Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=28.04µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.685853922Z level=info msg="Executing migration" id="Update alert_notification table charset" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.685884182Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=40.1µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.690080047Z level=info msg="Executing migration" id="create notification_journal table v1" 
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.69150686Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.426603ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.696116432Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.697098308Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=981.755µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.701490057Z level=info msg="Executing migration" id="drop alert_notification_journal" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.702577794Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.081618ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.70615297Z level=info msg="Executing migration" id="create alert_notification_state table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.707057694Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=904.604µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.712388438Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.713353823Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=965.185µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.718215759Z level=info msg="Executing migration" id="Add for to alert table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.722220182Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.004253ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.726074913Z level=info msg="Executing migration" id="Add 
column uid in alert_notification" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.730126047Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.050974ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.736868322Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.737241078Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=380.526µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.742536411Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.743547117Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.010286ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.750649899Z level=info msg="Executing migration" id="Remove unique index org_id_name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.751629634Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=979.365µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.756732974Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.761842135Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.109141ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.766714511Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.766912974Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=190.003µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.772171387Z level=info msg="Executing migration" 
id="Add non-unique index alert_notification_state_alert_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.773134992Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=963.355µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.777384379Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.778314033Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=929.514µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.782642021Z level=info msg="Executing migration" id="Drop old annotation table v4" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.782851215Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=209.254µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.786071365Z level=info msg="Executing migration" id="create annotation table v5" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.786854667Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=777.992µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.789893175Z level=info msg="Executing migration" id="add index annotation 0 v3" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.790674637Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=781.462µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.794452357Z level=info msg="Executing migration" id="add index annotation 1 v3" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.795466643Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.014166ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.798872056Z level=info msg="Executing migration" id="add index annotation 2 v3" 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:29.799889442Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.017506ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.803447318Z level=info msg="Executing migration" id="add index annotation 3 v3" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.804721098Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.273ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.809180768Z level=info msg="Executing migration" id="add index annotation 4 v3" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.810829174Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.647676ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.841517026Z level=info msg="Executing migration" id="Update annotation table charset" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.841592217Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=120.642µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.845089832Z level=info msg="Executing migration" id="Add column region_id to annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.849748136Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.657954ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.854081074Z level=info msg="Executing migration" id="Drop category_id index" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.854991618Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=910.484µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.858191088Z level=info msg="Executing migration" id="Add column tags to annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.862372424Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" 
duration=4.186506ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.865386391Z level=info msg="Executing migration" id="Create annotation_tag table v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.866089962Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=703.181µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.870469931Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.87166817Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.198269ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.909735688Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.911219021Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.486193ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.915184683Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.950345224Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=35.156971ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.955713848Z level=info msg="Executing migration" id="Create annotation_tag table v3" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.957021429Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.320411ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.960789948Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:29.962400174Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.609626ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.966227324Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.96658969Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=362.016µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.970854377Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.971496237Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=641.51µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.975028672Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.975564841Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=537.039µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.979704126Z level=info msg="Executing migration" id="Add created time to annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.985553698Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.848172ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.990317423Z level=info msg="Executing migration" id="Add updated time to annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.995130459Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.810736ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:29.999349145Z level=info msg="Executing migration" id="Add index for created in annotation table" 13:47:15 grafana 
| logger=migrator t=2024-07-03T13:44:30.000590835Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.24391ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.00411097Z level=info msg="Executing migration" id="Add index for updated in annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.005344489Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.235239ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.009562854Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.00992714Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=365.516µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.01320644Z level=info msg="Executing migration" id="Add epoch_end column" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.017436086Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.227536ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.021163683Z level=info msg="Executing migration" id="Add index for epoch_end" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.022871279Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.697166ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.028176611Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.02880577Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=684.25µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.032367305Z level=info msg="Executing migration" id="Move region to single row" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.032968134Z level=info 
msg="Migration successfully executed" id="Move region to single row" duration=601.509µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.036206814Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.037224279Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.017385ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.041443724Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.042359248Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=915.564µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.045621189Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.046796257Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.174408ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.050387612Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.051296506Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=908.834µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.055589612Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.056647988Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.058486ms 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:30.060337594Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.06137665Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.038466ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.064715432Z level=info msg="Executing migration" id="Increase tags column to length 4096" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.064836734Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=127.212µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.068626792Z level=info msg="Executing migration" id="create test_data table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.069494515Z level=info msg="Migration successfully executed" id="create test_data table" duration=867.333µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.072938768Z level=info msg="Executing migration" id="create dashboard_version table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.074280999Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.341741ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.077896684Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.079377477Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.480673ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.083882726Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.085138135Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.254339ms 13:47:15 
grafana | logger=migrator t=2024-07-03T13:44:30.088550178Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.089048255Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=497.497µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.092400567Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.093176889Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=775.432µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.096544321Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.096641412Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=96.511µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.100616833Z level=info msg="Executing migration" id="create team table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.102041825Z level=info msg="Migration successfully executed" id="create team table" duration=1.423922ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.105697331Z level=info msg="Executing migration" id="add index team.org_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.107321026Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.629905ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.111007483Z level=info msg="Executing migration" id="add unique index team_org_id_name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.113285838Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=2.277665ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.117886398Z 
level=info msg="Executing migration" id="Add column uid in team" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.12388811Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.002342ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.127389634Z level=info msg="Executing migration" id="Update uid column values in team" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.127642288Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=252.104µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.130246078Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.131182052Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=935.594µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.135419018Z level=info msg="Executing migration" id="create team member table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.136325651Z level=info msg="Migration successfully executed" id="create team member table" duration=906.564µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.139689193Z level=info msg="Executing migration" id="add index team_member.org_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.14074933Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.060176ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.143923858Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.144877803Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=953.825µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.149024696Z level=info msg="Executing migration" id="add index team_member.team_id" 13:47:15 grafana | 
logger=migrator t=2024-07-03T13:44:30.150213745Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.187739ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.15381859Z level=info msg="Executing migration" id="Add column email to team table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.159875383Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=6.057323ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.163152744Z level=info msg="Executing migration" id="Add column external to team_member table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.168862251Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.708887ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.172978174Z level=info msg="Executing migration" id="Add column permission to team_member table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.177594885Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.616271ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.180673522Z level=info msg="Executing migration" id="create dashboard acl table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.181665598Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=991.816µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.184828306Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.185774061Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=945.675µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.189742322Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.191024182Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.28172ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.232167553Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.234253815Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=2.085932ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.238372258Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.239939212Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.566494ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.24435886Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.245049071Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=690.011µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.24821798Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.249751713Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.545653ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.253389219Z level=info msg="Executing migration" id="add index dashboard_permission"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.254918293Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.519564ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.365813326Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.366962643Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.150147ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.371075217Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.371626085Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=551.198µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.376638432Z level=info msg="Executing migration" id="create tag table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.377776909Z level=info msg="Migration successfully executed" id="create tag table" duration=1.139537ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.382068565Z level=info msg="Executing migration" id="add index tag.key_value"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.383257583Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.188558ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.387035772Z level=info msg="Executing migration" id="create login attempt table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.387804243Z level=info msg="Migration successfully executed" id="create login attempt table" duration=772.031µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.392245682Z level=info msg="Executing migration" id="add index login_attempt.username"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.393493511Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.247339ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.396994744Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.39802819Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.033376ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.401556894Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.415392767Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.836523ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.419606752Z level=info msg="Executing migration" id="create login_attempt v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.420296752Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=688.55µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.423556022Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.42469843Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.156798ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.428063902Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.428430397Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=366.255µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.431829689Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.432323147Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=492.968µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.436006694Z level=info msg="Executing migration" id="create user auth table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.436568952Z level=info msg="Migration successfully executed" id="create user auth table" duration=562.048µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.440044266Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.44160058Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.555814ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.446124939Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.446328952Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=203.913µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.450219152Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.458837574Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.618682ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.462141005Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.468087706Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.946791ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.471763133Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.477241907Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.477794ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.481804127Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.485523984Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.713887ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.488723933Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.489453754Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=729.731µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.493145821Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.49831802Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.171519ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.502980572Z level=info msg="Executing migration" id="create server_lock table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.504039618Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.052326ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.507615153Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.508640459Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.025196ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.511664956Z level=info msg="Executing migration" id="create user auth token table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.51263348Z level=info msg="Migration successfully executed" id="create user auth token table" duration=968.285µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.516623822Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.517652097Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.027976ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.520900287Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.522177917Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.27578ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.52566139Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.527302116Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.641036ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.531684783Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.537166437Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.481684ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.540368876Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.541064997Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=696.041µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.544240006Z level=info msg="Executing migration" id="create cache_data table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.544879576Z level=info msg="Migration successfully executed" id="create cache_data table" duration=639.15µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.549376225Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.553074692Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=3.679816ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.558552855Z level=info msg="Executing migration" id="create short_url table v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.559400728Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=853.483µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.562370524Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.563071155Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=700.591µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.567570744Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.567762687Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=195.033µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.571512945Z level=info msg="Executing migration" id="delete alert_definition table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.571707038Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=193.983µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.574510171Z level=info msg="Executing migration" id="recreate alert_definition table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.575632758Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.122497ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.639187944Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.640973141Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.787867ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.644394584Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.645573762Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.180248ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.648776411Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.648848862Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=73.251µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.651863428Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.652932895Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.069407ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.657041868Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.658150175Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.109527ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.660934598Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.662156147Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.221629ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.667425248Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.668662947Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.239499ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.671412679Z level=info msg="Executing migration" id="Add column paused in alert_definition"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.678051771Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.638572ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.683747348Z level=info msg="Executing migration" id="drop alert_definition table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.685224961Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.478253ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.832550423Z level=info msg="Executing migration" id="delete alert_definition_version table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.832789237Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=243.044µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.839220246Z level=info msg="Executing migration" id="recreate alert_definition_version table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.840971113Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.752497ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.845277939Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.846588329Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.31053ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.851133009Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.852224715Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.091196ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.856578352Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.856681624Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=105.392µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.859553648Z level=info msg="Executing migration" id="drop alert_definition_version table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.86033055Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=776.732µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.865332217Z level=info msg="Executing migration" id="create alert_instance table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.867194775Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.859398ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.870643718Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.872272903Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.628425ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.877894899Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.880134904Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=2.244125ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.883899832Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.892062867Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.174265ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.895016153Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.895770944Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=754.571µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.900001299Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.900676719Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=675.48µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.90334301Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.928344804Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=24.999674ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.932185623Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.954908852Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.729689ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.959596374Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.960277835Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=681.391µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.962407538Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.963665957Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.257099ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.966869516Z level=info msg="Executing migration" id="add current_reason column related to current_state"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.973980465Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.111559ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.978143029Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.983638374Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.494775ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.986798822Z level=info msg="Executing migration" id="create alert_rule table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.987727346Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=928.344µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.991138558Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:30.992066793Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=922.355µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.047145949Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.048631592Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.485223ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.051828162Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.053096812Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.27892ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.056295702Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.056365783Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=70.241µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.0619352Z level=info msg="Executing migration" id="add column for to alert_rule"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.071213694Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.280084ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.073900976Z level=info msg="Executing migration" id="add column annotations to alert_rule"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.078236234Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.335068ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.081254251Z level=info msg="Executing migration" id="add column labels to alert_rule"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.087248115Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.993694ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.091736015Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.092642059Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=906.004µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.095332921Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.096301156Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=967.885µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.09910656Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.105333947Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.227147ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.11060734Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.119066491Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=8.446931ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.122563176Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.123722684Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.158718ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.126631069Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.132641523Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.010474ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.137299246Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.144030841Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.731685ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.146785324Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.146882516Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=97.582µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.149910793Z level=info msg="Executing migration" id="create alert_rule_version table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.151049571Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.138518ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.155600162Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.156554447Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=954.385µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.159725446Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.160772843Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.047787ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.163904911Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.163966342Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=61.801µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.487098116Z level=info msg="Executing migration" id="add column for to alert_rule_version"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.493618877Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.523001ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.497127782Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.507121388Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=9.997366ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.509908671Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.514390001Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.48128ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.519211987Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.526856716Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.646929ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.531197164Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.537630464Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.42752ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.540463059Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.540520359Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=57.39µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.543607508Z level=info msg="Executing migration" id=create_alert_configuration_table
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.544185097Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=577.199µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.548434323Z level=info msg="Executing migration" id="Add column default in alert_configuration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.559493395Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=11.058232ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.562223648Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.562269049Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=45.721µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.565055372Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.570215503Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.164241ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.57453832Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.575485965Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=946.975µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.578445231Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.58476337Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.317619ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.587766597Z level=info msg="Executing migration" id=create_ngalert_configuration_table
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.588547589Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=783.782µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.593687379Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.594696535Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.008806ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.597739922Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.604225944Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.479871ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.659928273Z level=info msg="Executing migration" id="create provenance_type table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.661282954Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.354511ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.775471786Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:31.776574803Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.105377ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.317934268Z level=info msg="Executing migration" id="create alert_image table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.319030035Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.098137ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.323779449Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.324749204Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=969.695µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.33024708Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.330331051Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=84.811µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.334937533Z level=info msg="Executing migration" id=create_alert_configuration_history_table
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.33601879Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.081487ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.338820013Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.339732688Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=909.315µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.344157187Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.344533963Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.348344382Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.348736398Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=391.986µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.353074536Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.355234069Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=2.159533ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.360278928Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.368141211Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.862693ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.372117303Z level=info msg="Executing migration" id="create library_element table v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.374302277Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=2.168023ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.377949334Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
13:47:15 grafana |
logger=migrator t=2024-07-03T13:44:32.378806267Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=855.634µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.650462373Z level=info msg="Executing migration" id="create library_element_connection table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.652423023Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.96406ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.656364455Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.657635704Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.271189ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.661127439Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.662133364Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.005565ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.666650545Z level=info msg="Executing migration" id="increase max description length to 2048" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.666679035Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=29.25µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.671029373Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.671098624Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=69.631µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.675212939Z level=info msg="Executing 
migration" id="add library_element folder uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.682092616Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=6.879507ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.686322222Z level=info msg="Executing migration" id="populate library_element folder_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.686692308Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=370.415µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.691525903Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.692743192Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.216069ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.858650048Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:32.859509092Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=861.394µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.028290003Z level=info msg="Executing migration" id="create data_keys table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.030741781Z level=info msg="Migration successfully executed" id="create data_keys table" duration=2.455608ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.087336123Z level=info msg="Executing migration" id="create secrets table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.088849717Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.513804ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.09353269Z level=info msg="Executing migration" id="rename data_keys name column to id" 13:47:15 
grafana | logger=migrator t=2024-07-03T13:44:33.12374417Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=30.20856ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.156399169Z level=info msg="Executing migration" id="add name column into data_keys" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.166851762Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=10.454204ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.195310975Z level=info msg="Executing migration" id="copy data_keys id column values into name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.19565431Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=347.085µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.217108175Z level=info msg="Executing migration" id="rename data_keys name column to label" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.253568283Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=36.455648ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.25664332Z level=info msg="Executing migration" id="rename data_keys id column back to name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.286942991Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.299101ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.290139991Z level=info msg="Executing migration" id="create kv_store table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.290829232Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=689.061µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.296048363Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.297472195Z 
level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.423142ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.394018679Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.394390205Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=372.596µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.398358297Z level=info msg="Executing migration" id="create permission table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.39986002Z level=info msg="Migration successfully executed" id="create permission table" duration=1.502433ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.406510534Z level=info msg="Executing migration" id="add unique index permission.role_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.407504909Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=994.375µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.410862482Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.412643579Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.851019ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.415772898Z level=info msg="Executing migration" id="create role table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.417548985Z level=info msg="Migration successfully executed" id="create role table" duration=1.774987ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.423064552Z level=info msg="Executing migration" id="add column display_name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.431380421Z level=info msg="Migration successfully executed" id="add column 
display_name" duration=8.316889ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.434638642Z level=info msg="Executing migration" id="add column group_name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.440144368Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.505216ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.443331697Z level=info msg="Executing migration" id="add index role.org_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.444340253Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.008666ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.450082173Z level=info msg="Executing migration" id="add unique index role_org_id_name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.451483634Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.400922ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.455476636Z level=info msg="Executing migration" id="add index role_org_id_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.456538513Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.061447ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.459450668Z level=info msg="Executing migration" id="create team role table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.460342862Z level=info msg="Migration successfully executed" id="create team role table" duration=892.144µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.466231934Z level=info msg="Executing migration" id="add index team_role.org_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.46789551Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.663526ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.47110137Z level=info msg="Executing migration" id="add unique index 
team_role_org_id_team_id_role_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.473350415Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.249525ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.481283718Z level=info msg="Executing migration" id="add index team_role.team_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.48395531Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.670942ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.48973722Z level=info msg="Executing migration" id="create user role table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.490768976Z level=info msg="Migration successfully executed" id="create user role table" duration=1.031856ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.498225432Z level=info msg="Executing migration" id="add index user_role.org_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.499685705Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.459933ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.506689744Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.508260719Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.570935ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.516499687Z level=info msg="Executing migration" id="add index user_role.user_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.517634015Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.135248ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.527260894Z level=info msg="Executing migration" id="create builtin role table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.528730067Z 
level=info msg="Migration successfully executed" id="create builtin role table" duration=1.468873ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.539448724Z level=info msg="Executing migration" id="add index builtin_role.role_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.542207987Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=2.756093ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.552080481Z level=info msg="Executing migration" id="add index builtin_role.name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.553191108Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.110257ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.559422746Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.573650517Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=14.229562ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.584383625Z level=info msg="Executing migration" id="add index builtin_role.org_id" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.586509288Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.123004ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.596580205Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.598141489Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.561065ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.60462407Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.608326307Z level=info msg="Migration successfully executed" id="Remove unique 
index role_org_id_uid" duration=3.701417ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.619331999Z level=info msg="Executing migration" id="add unique index role.uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.620727651Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.372671ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.625602526Z level=info msg="Executing migration" id="create seed assignment table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.626650883Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.048547ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.634843221Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.636863632Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=2.022332ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.644978118Z level=info msg="Executing migration" id="add column hidden to role table" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.655841538Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=10.861819ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.667829544Z level=info msg="Executing migration" id="permission kind migration" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.676105223Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.276489ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.681192203Z level=info msg="Executing migration" id="permission attribute migration" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.692050822Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=10.854229ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.700862559Z 
level=info msg="Executing migration" id="permission identifier migration" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.71248421Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=11.620511ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.720438314Z level=info msg="Executing migration" id="add permission identifier index" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.721544601Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.103717ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.732224627Z level=info msg="Executing migration" id="add permission action scope role_id index" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.733527707Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.30594ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.777165187Z level=info msg="Executing migration" id="remove permission role_id action scope index" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.778510778Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.348361ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.787480428Z level=info msg="Executing migration" id="create query_history table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.788500044Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.020326ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.792485506Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.794502727Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.017351ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.80367973Z level=info 
msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.803865543Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=186.393µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.815769559Z level=info msg="Executing migration" id="rbac disabled migrator" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.815824609Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=58.64µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.825379698Z level=info msg="Executing migration" id="teams permissions migration" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.826072839Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=696.351µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.83766667Z level=info msg="Executing migration" id="dashboard permissions" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.838618725Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=955.235µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.846420156Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.847416362Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=999.676µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.851773399Z level=info msg="Executing migration" id="drop managed folder create actions" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.852041174Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=270.925µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.862419965Z level=info msg="Executing migration" id="alerting notification permissions" 13:47:15 grafana | 
logger=migrator t=2024-07-03T13:44:33.863439171Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=1.024046ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.873842243Z level=info msg="Executing migration" id="create query_history_star table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.875749043Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.90744ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.886052324Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.887672509Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.621426ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.895587942Z level=info msg="Executing migration" id="add column org_id in query_history_star" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.907483207Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.892175ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.915034485Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.915289279Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=254.124µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.922741085Z level=info msg="Executing migration" id="create correlation table v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.924146967Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.407242ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.934130292Z level=info msg="Executing migration" id="add index correlations.uid" 13:47:15 grafana | 
logger=migrator t=2024-07-03T13:44:33.938382218Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=4.251966ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.948533087Z level=info msg="Executing migration" id="add index correlations.source_uid" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.950184522Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.658566ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.959932384Z level=info msg="Executing migration" id="add correlation config column" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.973305042Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.375468ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.982273862Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.984232603Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.959611ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.99112252Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:33.992879957Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.758477ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.005536845Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.033986918Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=28.422972ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.042455119Z level=info msg="Executing migration" id="create correlation v2" 13:47:15 grafana | logger=migrator 
t=2024-07-03T13:44:34.045870702Z level=info msg="Migration successfully executed" id="create correlation v2" duration=3.415983ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.059069578Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.060582191Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.515573ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.069455449Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.073157537Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=3.702388ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.080255677Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.081425285Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.169398ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.087552861Z level=info msg="Executing migration" id="copy correlation v1 to v2" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.088041169Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=489.537µs 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.096841925Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.098671424Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.825649ms 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.179256468Z level=info msg="Executing migration" id="add provisioning column" 13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.185548766Z level=info msg="Migration successfully executed" 
id="add provisioning column" duration=6.296189ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.200337526Z level=info msg="Executing migration" id="create entity_events table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.201770118Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.433462ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.211326857Z level=info msg="Executing migration" id="create dashboard public config v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.213021263Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.695056ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.220948657Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.221365703Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.228076658Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.228944791Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.237532485Z level=info msg="Executing migration" id="Drop old dashboard public config table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.238790644Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.260219ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.249080924Z level=info msg="Executing migration" id="recreate dashboard public config v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.251596003Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.517199ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.261449367Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.262760247Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.31237ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.271682676Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.273732038Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.050742ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.28027208Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.281492679Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.221429ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.287176177Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.289704107Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.53009ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.299295736Z level=info msg="Executing migration" id="Drop public config table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.300683067Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.389981ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.307218999Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.308656242Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.440263ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.317738883Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.318871951Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.135487ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.328121654Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.330490671Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.373507ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.340900773Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.342504258Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.603925ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.348450831Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.372519595Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.066414ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.384555053Z level=info msg="Executing migration" id="add annotations_enabled column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.396471878Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.916066ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.401185701Z level=info msg="Executing migration" id="add time_selection_enabled column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.410378214Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.188223ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.41779256Z level=info msg="Executing migration" id="delete orphaned public dashboards"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.41846331Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=672.13µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.423554029Z level=info msg="Executing migration" id="add share column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.433680487Z level=info msg="Migration successfully executed" id="add share column" duration=10.124918ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.441763743Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.442063807Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=301.064µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.448557058Z level=info msg="Executing migration" id="create file table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.451737608Z level=info msg="Migration successfully executed" id="create file table" duration=3.18107ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.460297841Z level=info msg="Executing migration" id="file table idx: path natural pk"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.461860985Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.562614ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.471049128Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.472628523Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.580815ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.476780888Z level=info msg="Executing migration" id="create file_meta table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.477856384Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.075006ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.486214934Z level=info msg="Executing migration" id="file table idx: path key"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.487729408Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.517404ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.492843558Z level=info msg="Executing migration" id="set path collation in file table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.493123232Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=282.135µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.600632195Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.60096976Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=336.055µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.609202018Z level=info msg="Executing migration" id="managed permissions migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.610405457Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.204419ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.614727414Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.615240382Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=515.858µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.619321426Z level=info msg="Executing migration" id="RBAC action name migrator"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.62088041Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.557044ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.62475782Z level=info msg="Executing migration" id="Add UID column to playlist"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.634949169Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.185799ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.640568236Z level=info msg="Executing migration" id="Update uid column values in playlist"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.640935782Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=372.656µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.645014256Z level=info msg="Executing migration" id="Add index for uid in playlist"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.64656356Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.549254ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.704902547Z level=info msg="Executing migration" id="update group index for alert rules"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.706067366Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=1.169529ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.721420134Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.722091715Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=673.721µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.731852857Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.733007285Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.156148ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.742485242Z level=info msg="Executing migration" id="add action column to seed_assignment"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.755037998Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.549635ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.765652703Z level=info msg="Executing migration" id="add scope column to seed_assignment"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.777534488Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=11.876775ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.806465978Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.809302112Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=2.837645ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.814166648Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:34.896827734Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=82.656876ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.012653806Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.015021293Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.370657ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.019206608Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.020494348Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.28734ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.025611838Z level=info msg="Executing migration" id="add primary key to seed_assigment"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.050727628Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.11723ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.056421567Z level=info msg="Executing migration" id="add origin column to seed_assignment"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.066114428Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.691751ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.089705524Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.090444756Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=736.272µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.094319576Z level=info msg="Executing migration" id="prevent seeding OnCall access"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.094604021Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=284.635µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.100571323Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.100984389Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=412.626µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.104586286Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.105058793Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=473.047µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.108779361Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.109257838Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=477.527µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.112859764Z level=info msg="Executing migration" id="create folder table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.113939181Z level=info msg="Migration successfully executed" id="create folder table" duration=1.078417ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.120200988Z level=info msg="Executing migration" id="Add index for parent_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.12223677Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.035262ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.126659169Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.128212493Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.553704ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.135463286Z level=info msg="Executing migration" id="Update folder title length"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.135571387Z level=info msg="Migration successfully executed" id="Update folder title length" duration=108.121µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.142276542Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.144348374Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.071472ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.149788288Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.151392013Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.606065ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.163678134Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.165024755Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.346201ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.168457318Z level=info msg="Executing migration" id="Sync dashboard and folder table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.169324302Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=866.744µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.17308128Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.173432296Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=350.816µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.176778478Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.17887095Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.092982ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.184679711Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.185998041Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.31831ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.190095395Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.191350314Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.255079ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.19749942Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.198839741Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.340051ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.20394993Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.20583617Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.884879ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.211198763Z level=info msg="Executing migration" id="create anon_device table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.21229636Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.097887ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.217599022Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.219290949Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.690957ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.225712679Z level=info msg="Executing migration" id="add index anon_device.updated_at"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.227643408Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.93043ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.233617441Z level=info msg="Executing migration" id="create signing_key table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.234725559Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.107978ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.241587215Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.242868835Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.28082ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.245948683Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.247183682Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.234999ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.250773118Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.251153464Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=380.596µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.256086371Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.265392665Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.305494ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.268318601Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.2689125Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=594.589µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.271714714Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.271758505Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=43.92µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.273717735Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.274623549Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=905.464µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.279540035Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.279624627Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=85.302µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.28239968Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.284425471Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.024321ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.28755098Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.28881302Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.26204ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.292963314Z level=info msg="Executing migration" id="create sso_setting table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.294140372Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.169108ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.327662634Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.328486597Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=824.853µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.332098003Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.332444348Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=347.095µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.336928598Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.337639549Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=710.371µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.340649905Z level=info msg="Executing migration" id="create cloud_migration table v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.34162124Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=971.335µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.345030844Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.346011169Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=980.165µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.374117975Z level=info msg="Executing migration" id="add stack_id column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.386079242Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.962057ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.410662124Z level=info msg="Executing migration" id="add region_slug column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.422040851Z level=info msg="Migration successfully executed" id="add region_slug column" duration=11.379346ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.425263291Z level=info msg="Executing migration" id="add cluster_slug column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.434593446Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.327515ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.439723905Z level=info msg="Executing migration" id="add migration uid column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.446329878Z level=info msg="Migration successfully executed" id="add migration uid column" duration=6.605473ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.449925054Z level=info msg="Executing migration" id="Update uid column values for migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.450191208Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=263.934µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.453645892Z level=info msg="Executing migration" id="Add unique index migration_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.454944232Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.29834ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.458359885Z level=info msg="Executing migration" id="add migration run uid column"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.467776662Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.415777ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.473515991Z level=info msg="Executing migration" id="Update uid column values for migration run"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.473772235Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=255.664µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.477130557Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.478452378Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.321821ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.481975452Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.482117284Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=157.822µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.486476332Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.496529919Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.051587ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.502019654Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.511778166Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.758512ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.515398922Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.515790678Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=391.646µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.52106803Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.521364385Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=295.955µs
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.524705377Z level=info msg="Executing migration" id="add record column to alert_rule table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.537463085Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.758238ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.541321715Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.549412151Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.089896ms
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.554100744Z level=info msg="migrations completed" performed=572 skipped=0 duration=9.892438039s
13:47:15 grafana | logger=migrator t=2024-07-03T13:44:35.554795654Z level=info msg="Unlocking database"
13:47:15 grafana | logger=sqlstore t=2024-07-03T13:44:35.574614283Z level=info msg="Created default admin" user=admin
13:47:15 grafana | logger=sqlstore t=2024-07-03T13:44:35.575035009Z level=info msg="Created default organization"
13:47:15 grafana | logger=secrets t=2024-07-03T13:44:35.579846304Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
13:47:15 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-07-03T13:44:35.629348534Z level=info msg="Restored cache from database" duration=475.878µs
13:47:15 grafana | logger=plugin.store t=2024-07-03T13:44:35.631239233Z level=info msg="Loading plugins..."
13:47:15 grafana | logger=plugins.registration t=2024-07-03T13:44:35.661753607Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered"
13:47:15 grafana | logger=plugins.initialization t=2024-07-03T13:44:35.661814438Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered"
13:47:15 grafana | logger=local.finder t=2024-07-03T13:44:35.66193843Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
13:47:15 grafana | logger=plugin.store t=2024-07-03T13:44:35.662015511Z level=info msg="Plugins loaded" count=54 duration=30.776518ms
13:47:15 grafana | logger=query_data t=2024-07-03T13:44:35.666083464Z level=info msg="Query Service initialization"
13:47:15 grafana | logger=live.push_http t=2024-07-03T13:44:35.670036096Z level=info msg="Live Push Gateway initialization"
13:47:15 grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-07-03T13:44:35.681759788Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386
13:47:15 grafana | logger=ngalert.state.manager t=2024-07-03T13:44:35.691593031Z level=info msg="Running in alternative execution of Error/NoData mode"
13:47:15 grafana | logger=infra.usagestats.collector t=2024-07-03T13:44:35.693565182Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
13:47:15 grafana | logger=provisioning.datasources t=2024-07-03T13:44:35.695337739Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
13:47:15 grafana | logger=provisioning.alerting t=2024-07-03T13:44:35.71534666Z level=info msg="starting to provision alerting"
13:47:15 grafana | logger=provisioning.alerting t=2024-07-03T13:44:35.715382431Z level=info msg="finished to provision alerting"
13:47:15 grafana | logger=ngalert.state.manager t=2024-07-03T13:44:35.716025621Z level=info msg="Warming state cache for startup"
13:47:15 grafana | logger=grafanaStorageLogger t=2024-07-03T13:44:35.71660719Z level=info msg="Storage starting"
13:47:15 grafana | logger=ngalert.state.manager t=2024-07-03T13:44:35.716693271Z level=info msg="State cache has been initialized" states=0 duration=666.74µs
13:47:15 grafana | logger=ngalert.multiorg.alertmanager t=2024-07-03T13:44:35.716737442Z level=info msg="Starting MultiOrg Alertmanager"
13:47:15 grafana | logger=ngalert.scheduler t=2024-07-03T13:44:35.716776803Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
13:47:15 grafana | logger=ticker t=2024-07-03T13:44:35.716838424Z level=info msg=starting first_tick=2024-07-03T13:44:40Z
13:47:15 grafana | logger=http.server t=2024-07-03T13:44:35.722538312Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
13:47:15 grafana | logger=provisioning.dashboard t=2024-07-03T13:44:35.790061532Z level=info msg="starting to provision dashboards"
13:47:15 grafana | logger=plugins.update.checker t=2024-07-03T13:44:35.793110779Z level=info msg="Update check succeeded" duration=77.22081ms
13:47:15 grafana | logger=grafana.update.checker t=2024-07-03T13:44:35.794918567Z level=info msg="Update check succeeded" duration=79.15597ms
13:47:15 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-07-03T13:44:35.822439035Z level=info msg="Patterns update finished" duration=103.997716ms
13:47:15 grafana | logger=sqlstore.transactions t=2024-07-03T13:44:35.853769512Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
13:47:15 grafana | logger=grafana-apiserver t=2024-07-03T13:44:35.97260759Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
13:47:15 grafana | logger=grafana-apiserver t=2024-07-03T13:44:35.973164378Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
13:47:15 grafana | logger=provisioning.dashboard t=2024-07-03T13:44:36.256320776Z level=info msg="finished to provision dashboards"
13:47:15 grafana | logger=infra.usagestats t=2024-07-03T13:46:21.727684873Z level=info msg="Usage stats are ready to report"
13:47:15 ===================================
13:47:15 ======== Logs from kafka ========
13:47:15 kafka | ===> User
13:47:15 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
13:47:15 kafka | ===> Configuring ...
13:47:15 kafka | Running in Zookeeper mode...
13:47:15 kafka | ===> Running preflight checks ...
13:47:15 kafka | ===> Check if /var/lib/kafka/data is writable ...
13:47:15 kafka | ===> Check if Zookeeper is healthy ...
13:47:15 kafka | [2024-07-03 13:44:30,826] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
13:47:15 kafka | [2024-07-03 13:44:30,826] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
13:47:15 kafka | [2024-07-03 13:44:30,826] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
13:47:15 kafka | [2024-07-03 13:44:30,826] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
13:47:15 kafka | [2024-07-03 13:44:30,826] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
13:47:15 kafka | [2024-07-03 13:44:30,826] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-
2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client 
environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,827] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,831] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:30,835] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 13:47:15 kafka | [2024-07-03 13:44:30,840] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 13:47:15 kafka | [2024-07-03 13:44:30,847] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:30,862] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. 
(org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:30,863] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:30,870] INFO Socket connection established, initiating session, client: /172.17.0.6:52588, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:30,905] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x1000003c9f10000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:31,037] INFO Session: 0x1000003c9f10000 closed (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:31,037] INFO EventThread shut down for session: 0x1000003c9f10000 (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | Using log4j config /etc/kafka/log4j.properties 13:47:15 kafka | ===> Launching ... 13:47:15 kafka | ===> Launching kafka ... 13:47:15 kafka | [2024-07-03 13:44:31,953] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 13:47:15 kafka | [2024-07-03 13:44:32,255] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 13:47:15 kafka | [2024-07-03 13:44:32,320] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 13:47:15 kafka | [2024-07-03 13:44:32,321] INFO starting (kafka.server.KafkaServer) 13:47:15 kafka | [2024-07-03 13:44:32,322] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 13:47:15 kafka | [2024-07-03 13:44:32,334] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 13:47:15 kafka | [2024-07-03 13:44:32,338] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,338] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,338] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,338] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../
share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1
-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../s
hare/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:java.compiler= 
(org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,339] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,341] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) 13:47:15 kafka | [2024-07-03 13:44:32,344] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 13:47:15 kafka | [2024-07-03 13:44:32,350] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:32,351] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 13:47:15 kafka | [2024-07-03 13:44:32,356] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. 
(org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:32,363] INFO Socket connection established, initiating session, client: /172.17.0.6:52590, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:32,372] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x1000003c9f10001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 13:47:15 kafka | [2024-07-03 13:44:32,380] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 13:47:15 kafka | [2024-07-03 13:44:33,392] INFO Cluster ID = ZzyuOxDrRguYnYKgclDwYw (kafka.server.KafkaServer) 13:47:15 kafka | [2024-07-03 13:44:33,397] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 13:47:15 kafka | [2024-07-03 13:44:33,452] INFO KafkaConfig values: 13:47:15 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 13:47:15 kafka | alter.config.policy.class.name = null 13:47:15 kafka | alter.log.dirs.replication.quota.window.num = 11 13:47:15 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 13:47:15 kafka | authorizer.class.name = 13:47:15 kafka | auto.create.topics.enable = true 13:47:15 kafka | auto.include.jmx.reporter = true 13:47:15 kafka | auto.leader.rebalance.enable = true 13:47:15 kafka | background.threads = 10 13:47:15 kafka | broker.heartbeat.interval.ms = 2000 13:47:15 kafka | broker.id = 1 13:47:15 kafka | broker.id.generation.enable = true 13:47:15 kafka | broker.rack = null 13:47:15 kafka | broker.session.timeout.ms = 9000 13:47:15 kafka | client.quota.callback.class = null 13:47:15 kafka | compression.type = producer 13:47:15 kafka | connection.failed.authentication.delay.ms = 100 13:47:15 kafka | connections.max.idle.ms = 600000 13:47:15 kafka | connections.max.reauth.ms = 0 13:47:15 kafka | control.plane.listener.name = null 13:47:15 kafka | 
controlled.shutdown.enable = true 13:47:15 kafka | controlled.shutdown.max.retries = 3 13:47:15 kafka | controlled.shutdown.retry.backoff.ms = 5000 13:47:15 kafka | controller.listener.names = null 13:47:15 kafka | controller.quorum.append.linger.ms = 25 13:47:15 kafka | controller.quorum.election.backoff.max.ms = 1000 13:47:15 kafka | controller.quorum.election.timeout.ms = 1000 13:47:15 kafka | controller.quorum.fetch.timeout.ms = 2000 13:47:15 kafka | controller.quorum.request.timeout.ms = 2000 13:47:15 kafka | controller.quorum.retry.backoff.ms = 20 13:47:15 kafka | controller.quorum.voters = [] 13:47:15 kafka | controller.quota.window.num = 11 13:47:15 kafka | controller.quota.window.size.seconds = 1 13:47:15 kafka | controller.socket.timeout.ms = 30000 13:47:15 kafka | create.topic.policy.class.name = null 13:47:15 kafka | default.replication.factor = 1 13:47:15 kafka | delegation.token.expiry.check.interval.ms = 3600000 13:47:15 kafka | delegation.token.expiry.time.ms = 86400000 13:47:15 kafka | delegation.token.master.key = null 13:47:15 kafka | delegation.token.max.lifetime.ms = 604800000 13:47:15 kafka | delegation.token.secret.key = null 13:47:15 kafka | delete.records.purgatory.purge.interval.requests = 1 13:47:15 kafka | delete.topic.enable = true 13:47:15 kafka | early.start.listeners = null 13:47:15 kafka | fetch.max.bytes = 57671680 13:47:15 kafka | fetch.purgatory.purge.interval.requests = 1000 13:47:15 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 13:47:15 kafka | group.consumer.heartbeat.interval.ms = 5000 13:47:15 kafka | group.consumer.max.heartbeat.interval.ms = 15000 13:47:15 kafka | group.consumer.max.session.timeout.ms = 60000 13:47:15 kafka | group.consumer.max.size = 2147483647 13:47:15 kafka | group.consumer.min.heartbeat.interval.ms = 5000 13:47:15 kafka | group.consumer.min.session.timeout.ms = 45000 13:47:15 kafka | group.consumer.session.timeout.ms = 45000 13:47:15 kafka | 
group.coordinator.new.enable = false 13:47:15 kafka | group.coordinator.threads = 1 13:47:15 kafka | group.initial.rebalance.delay.ms = 3000 13:47:15 kafka | group.max.session.timeout.ms = 1800000 13:47:15 kafka | group.max.size = 2147483647 13:47:15 kafka | group.min.session.timeout.ms = 6000 13:47:15 kafka | initial.broker.registration.timeout.ms = 60000 13:47:15 kafka | inter.broker.listener.name = PLAINTEXT 13:47:15 kafka | inter.broker.protocol.version = 3.6-IV2 13:47:15 kafka | kafka.metrics.polling.interval.secs = 10 13:47:15 kafka | kafka.metrics.reporters = [] 13:47:15 kafka | leader.imbalance.check.interval.seconds = 300 13:47:15 kafka | leader.imbalance.per.broker.percentage = 10 13:47:15 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 13:47:15 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 13:47:15 kafka | log.cleaner.backoff.ms = 15000 13:47:15 kafka | log.cleaner.dedupe.buffer.size = 134217728 13:47:15 kafka | log.cleaner.delete.retention.ms = 86400000 13:47:15 kafka | log.cleaner.enable = true 13:47:15 kafka | log.cleaner.io.buffer.load.factor = 0.9 13:47:15 kafka | log.cleaner.io.buffer.size = 524288 13:47:15 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 13:47:15 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 13:47:15 kafka | log.cleaner.min.cleanable.ratio = 0.5 13:47:15 kafka | log.cleaner.min.compaction.lag.ms = 0 13:47:15 kafka | log.cleaner.threads = 1 13:47:15 kafka | log.cleanup.policy = [delete] 13:47:15 kafka | log.dir = /tmp/kafka-logs 13:47:15 kafka | log.dirs = /var/lib/kafka/data 13:47:15 kafka | log.flush.interval.messages = 9223372036854775807 13:47:15 kafka | log.flush.interval.ms = null 13:47:15 kafka | log.flush.offset.checkpoint.interval.ms = 60000 13:47:15 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 13:47:15 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 13:47:15 kafka | 
log.index.interval.bytes = 4096 13:47:15 kafka | log.index.size.max.bytes = 10485760 13:47:15 kafka | log.local.retention.bytes = -2 13:47:15 kafka | log.local.retention.ms = -2 13:47:15 kafka | log.message.downconversion.enable = true 13:47:15 kafka | log.message.format.version = 3.0-IV1 13:47:15 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 13:47:15 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 13:47:15 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 13:47:15 kafka | log.message.timestamp.type = CreateTime 13:47:15 kafka | log.preallocate = false 13:47:15 kafka | log.retention.bytes = -1 13:47:15 kafka | log.retention.check.interval.ms = 300000 13:47:15 kafka | log.retention.hours = 168 13:47:15 kafka | log.retention.minutes = null 13:47:15 kafka | log.retention.ms = null 13:47:15 kafka | log.roll.hours = 168 13:47:15 kafka | log.roll.jitter.hours = 0 13:47:15 kafka | log.roll.jitter.ms = null 13:47:15 kafka | log.roll.ms = null 13:47:15 kafka | log.segment.bytes = 1073741824 13:47:15 kafka | log.segment.delete.delay.ms = 60000 13:47:15 kafka | max.connection.creation.rate = 2147483647 13:47:15 kafka | max.connections = 2147483647 13:47:15 kafka | max.connections.per.ip = 2147483647 13:47:15 kafka | max.connections.per.ip.overrides = 13:47:15 kafka | max.incremental.fetch.session.cache.slots = 1000 13:47:15 kafka | message.max.bytes = 1048588 13:47:15 kafka | metadata.log.dir = null 13:47:15 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 13:47:15 kafka | metadata.log.max.snapshot.interval.ms = 3600000 13:47:15 kafka | metadata.log.segment.bytes = 1073741824 13:47:15 kafka | metadata.log.segment.min.bytes = 8388608 13:47:15 kafka | metadata.log.segment.ms = 604800000 13:47:15 kafka | metadata.max.idle.interval.ms = 500 13:47:15 kafka | metadata.max.retention.bytes = 104857600 13:47:15 kafka | metadata.max.retention.ms = 604800000 13:47:15 kafka | metric.reporters = [] 13:47:15 kafka | 
metrics.num.samples = 2 13:47:15 kafka | metrics.recording.level = INFO 13:47:15 kafka | metrics.sample.window.ms = 30000 13:47:15 kafka | min.insync.replicas = 1 13:47:15 kafka | node.id = 1 13:47:15 kafka | num.io.threads = 8 13:47:15 kafka | num.network.threads = 3 13:47:15 kafka | num.partitions = 1 13:47:15 kafka | num.recovery.threads.per.data.dir = 1 13:47:15 kafka | num.replica.alter.log.dirs.threads = null 13:47:15 kafka | num.replica.fetchers = 1 13:47:15 kafka | offset.metadata.max.bytes = 4096 13:47:15 kafka | offsets.commit.required.acks = -1 13:47:15 kafka | offsets.commit.timeout.ms = 5000 13:47:15 kafka | offsets.load.buffer.size = 5242880 13:47:15 kafka | offsets.retention.check.interval.ms = 600000 13:47:15 kafka | offsets.retention.minutes = 10080 13:47:15 kafka | offsets.topic.compression.codec = 0 13:47:15 kafka | offsets.topic.num.partitions = 50 13:47:15 kafka | offsets.topic.replication.factor = 1 13:47:15 kafka | offsets.topic.segment.bytes = 104857600 13:47:15 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 13:47:15 kafka | password.encoder.iterations = 4096 13:47:15 kafka | password.encoder.key.length = 128 13:47:15 kafka | password.encoder.keyfactory.algorithm = null 13:47:15 kafka | password.encoder.old.secret = null 13:47:15 kafka | password.encoder.secret = null 13:47:15 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 13:47:15 kafka | process.roles = [] 13:47:15 kafka | producer.id.expiration.check.interval.ms = 600000 13:47:15 kafka | producer.id.expiration.ms = 86400000 13:47:15 kafka | producer.purgatory.purge.interval.requests = 1000 13:47:15 kafka | queued.max.request.bytes = -1 13:47:15 kafka | queued.max.requests = 500 13:47:15 kafka | quota.window.num = 11 13:47:15 kafka | quota.window.size.seconds = 1 13:47:15 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 13:47:15 kafka | remote.log.manager.task.interval.ms = 30000 
13:47:15 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 13:47:15 kafka | remote.log.manager.task.retry.backoff.ms = 500 13:47:15 kafka | remote.log.manager.task.retry.jitter = 0.2 13:47:15 kafka | remote.log.manager.thread.pool.size = 10 13:47:15 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 13:47:15 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 13:47:15 kafka | remote.log.metadata.manager.class.path = null 13:47:15 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 13:47:15 kafka | remote.log.metadata.manager.listener.name = null 13:47:15 kafka | remote.log.reader.max.pending.tasks = 100 13:47:15 kafka | remote.log.reader.threads = 10 13:47:15 kafka | remote.log.storage.manager.class.name = null 13:47:15 kafka | remote.log.storage.manager.class.path = null 13:47:15 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 13:47:15 kafka | remote.log.storage.system.enable = false 13:47:15 kafka | replica.fetch.backoff.ms = 1000 13:47:15 kafka | replica.fetch.max.bytes = 1048576 13:47:15 kafka | replica.fetch.min.bytes = 1 13:47:15 kafka | replica.fetch.response.max.bytes = 10485760 13:47:15 kafka | replica.fetch.wait.max.ms = 500 13:47:15 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 13:47:15 kafka | replica.lag.time.max.ms = 30000 13:47:15 kafka | replica.selector.class = null 13:47:15 kafka | replica.socket.receive.buffer.bytes = 65536 13:47:15 kafka | replica.socket.timeout.ms = 30000 13:47:15 kafka | replication.quota.window.num = 11 13:47:15 kafka | replication.quota.window.size.seconds = 1 13:47:15 kafka | request.timeout.ms = 30000 13:47:15 kafka | reserved.broker.max.id = 1000 13:47:15 kafka | sasl.client.callback.handler.class = null 13:47:15 kafka | sasl.enabled.mechanisms = [GSSAPI] 13:47:15 kafka | sasl.jaas.config = null 13:47:15 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:47:15 kafka | 
sasl.kerberos.min.time.before.relogin = 60000 13:47:15 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 13:47:15 kafka | sasl.kerberos.service.name = null 13:47:15 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 13:47:15 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 13:47:15 kafka | sasl.login.callback.handler.class = null 13:47:15 kafka | sasl.login.class = null 13:47:15 kafka | sasl.login.connect.timeout.ms = null 13:47:15 kafka | sasl.login.read.timeout.ms = null 13:47:15 kafka | sasl.login.refresh.buffer.seconds = 300 13:47:15 kafka | sasl.login.refresh.min.period.seconds = 60 13:47:15 kafka | sasl.login.refresh.window.factor = 0.8 13:47:15 kafka | sasl.login.refresh.window.jitter = 0.05 13:47:15 kafka | sasl.login.retry.backoff.max.ms = 10000 13:47:15 kafka | sasl.login.retry.backoff.ms = 100 13:47:15 kafka | sasl.mechanism.controller.protocol = GSSAPI 13:47:15 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 13:47:15 kafka | sasl.oauthbearer.clock.skew.seconds = 30 13:47:15 kafka | sasl.oauthbearer.expected.audience = null 13:47:15 kafka | sasl.oauthbearer.expected.issuer = null 13:47:15 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:47:15 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:47:15 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:47:15 kafka | sasl.oauthbearer.jwks.endpoint.url = null 13:47:15 kafka | sasl.oauthbearer.scope.claim.name = scope 13:47:15 kafka | sasl.oauthbearer.sub.claim.name = sub 13:47:15 kafka | sasl.oauthbearer.token.endpoint.url = null 13:47:15 kafka | sasl.server.callback.handler.class = null 13:47:15 kafka | sasl.server.max.receive.size = 524288 13:47:15 kafka | security.inter.broker.protocol = PLAINTEXT 13:47:15 kafka | security.providers = null 13:47:15 kafka | server.max.startup.time.ms = 9223372036854775807 13:47:15 kafka | socket.connection.setup.timeout.max.ms = 30000 13:47:15 kafka | socket.connection.setup.timeout.ms = 10000 13:47:15 kafka 
| socket.listen.backlog.size = 50 13:47:15 kafka | socket.receive.buffer.bytes = 102400 13:47:15 kafka | socket.request.max.bytes = 104857600 13:47:15 kafka | socket.send.buffer.bytes = 102400 13:47:15 kafka | ssl.cipher.suites = [] 13:47:15 kafka | ssl.client.auth = none 13:47:15 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:47:15 kafka | ssl.endpoint.identification.algorithm = https 13:47:15 kafka | ssl.engine.factory.class = null 13:47:15 kafka | ssl.key.password = null 13:47:15 kafka | ssl.keymanager.algorithm = SunX509 13:47:15 kafka | ssl.keystore.certificate.chain = null 13:47:15 kafka | ssl.keystore.key = null 13:47:15 kafka | ssl.keystore.location = null 13:47:15 kafka | ssl.keystore.password = null 13:47:15 kafka | ssl.keystore.type = JKS 13:47:15 kafka | ssl.principal.mapping.rules = DEFAULT 13:47:15 kafka | ssl.protocol = TLSv1.3 13:47:15 kafka | ssl.provider = null 13:47:15 kafka | ssl.secure.random.implementation = null 13:47:15 kafka | ssl.trustmanager.algorithm = PKIX 13:47:15 kafka | ssl.truststore.certificates = null 13:47:15 kafka | ssl.truststore.location = null 13:47:15 kafka | ssl.truststore.password = null 13:47:15 kafka | ssl.truststore.type = JKS 13:47:15 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 13:47:15 kafka | transaction.max.timeout.ms = 900000 13:47:15 kafka | transaction.partition.verification.enable = true 13:47:15 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 13:47:15 kafka | transaction.state.log.load.buffer.size = 5242880 13:47:15 kafka | transaction.state.log.min.isr = 2 13:47:15 kafka | transaction.state.log.num.partitions = 50 13:47:15 kafka | transaction.state.log.replication.factor = 3 13:47:15 kafka | transaction.state.log.segment.bytes = 104857600 13:47:15 kafka | transactional.id.expiration.ms = 604800000 13:47:15 kafka | unclean.leader.election.enable = false 13:47:15 kafka | unstable.api.versions.enable = false 13:47:15 kafka | 
zookeeper.clientCnxnSocket = null 13:47:15 kafka | zookeeper.connect = zookeeper:2181 13:47:15 kafka | zookeeper.connection.timeout.ms = null 13:47:15 kafka | zookeeper.max.in.flight.requests = 10 13:47:15 kafka | zookeeper.metadata.migration.enable = false 13:47:15 kafka | zookeeper.metadata.migration.min.batch.size = 200 13:47:15 kafka | zookeeper.session.timeout.ms = 18000 13:47:15 kafka | zookeeper.set.acl = false 13:47:15 kafka | zookeeper.ssl.cipher.suites = null 13:47:15 kafka | zookeeper.ssl.client.enable = false 13:47:15 kafka | zookeeper.ssl.crl.enable = false 13:47:15 kafka | zookeeper.ssl.enabled.protocols = null 13:47:15 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 13:47:15 kafka | zookeeper.ssl.keystore.location = null 13:47:15 kafka | zookeeper.ssl.keystore.password = null 13:47:15 kafka | zookeeper.ssl.keystore.type = null 13:47:15 kafka | zookeeper.ssl.ocsp.enable = false 13:47:15 kafka | zookeeper.ssl.protocol = TLSv1.2 13:47:15 kafka | zookeeper.ssl.truststore.location = null 13:47:15 kafka | zookeeper.ssl.truststore.password = null 13:47:15 kafka | zookeeper.ssl.truststore.type = null 13:47:15 kafka | (kafka.server.KafkaConfig) 13:47:15 kafka | [2024-07-03 13:44:33,484] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:47:15 kafka | [2024-07-03 13:44:33,484] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:47:15 kafka | [2024-07-03 13:44:33,486] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:47:15 kafka | [2024-07-03 13:44:33,488] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:47:15 kafka | [2024-07-03 13:44:33,517] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:44:33,525] INFO No logs found to be loaded in 
/var/lib/kafka/data (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:44:33,534] INFO Loaded 0 logs in 17ms (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:44:33,535] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:44:33,536] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:44:33,548] INFO Starting the log cleaner (kafka.log.LogCleaner) 13:47:15 kafka | [2024-07-03 13:44:33,590] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 13:47:15 kafka | [2024-07-03 13:44:33,604] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 13:47:15 kafka | [2024-07-03 13:44:33,618] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 13:47:15 kafka | [2024-07-03 13:44:33,657] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 13:47:15 kafka | [2024-07-03 13:44:33,972] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 13:47:15 kafka | [2024-07-03 13:44:33,991] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 13:47:15 kafka | [2024-07-03 13:44:33,992] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 13:47:15 kafka | [2024-07-03 13:44:33,998] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 13:47:15 kafka | [2024-07-03 13:44:34,002] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 13:47:15 
kafka | [2024-07-03 13:44:34,026] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,028] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,029] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,030] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,030] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,046] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 13:47:15 kafka | [2024-07-03 13:44:34,047] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 13:47:15 kafka | [2024-07-03 13:44:34,069] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 13:47:15 kafka | [2024-07-03 13:44:34,093] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1720014274082,1720014274082,1,0,0,72057610310844417,258,0,27 13:47:15 kafka | (kafka.zk.KafkaZkClient) 13:47:15 kafka | [2024-07-03 13:44:34,094] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 13:47:15 kafka | [2024-07-03 13:44:34,217] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 13:47:15 kafka | [2024-07-03 13:44:34,230] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,237] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 13:47:15 kafka | [2024-07-03 13:44:34,240] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,242] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,246] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,250] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,257] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 13:47:15 kafka | [2024-07-03 13:44:34,266] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:44:34,271] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:44:34,290] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 13:47:15 kafka | [2024-07-03 13:44:34,290] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,293] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 13:47:15 kafka | [2024-07-03 13:44:34,297] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,298] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 13:47:15 kafka | [2024-07-03 13:44:34,300] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 13:47:15 kafka | [2024-07-03 13:44:34,301] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,304] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,323] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,330] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,336] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 13:47:15 kafka | [2024-07-03 13:44:34,344] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 13:47:15 kafka | [2024-07-03 13:44:34,346] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 
13:47:15 kafka | [2024-07-03 13:44:34,346] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,346] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,346] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,350] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,350] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,350] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,350] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 13:47:15 kafka | [2024-07-03 13:44:34,351] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,354] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 13:47:15 kafka | [2024-07-03 13:44:34,356] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:47:15 kafka | [2024-07-03 13:44:34,361] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 13:47:15 kafka | [2024-07-03 13:44:34,362] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 13:47:15 kafka | [2024-07-03 13:44:34,364] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 
13:47:15 kafka | [2024-07-03 13:44:34,364] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 13:47:15 kafka | [2024-07-03 13:44:34,364] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 13:47:15 kafka | [2024-07-03 13:44:34,365] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 13:47:15 kafka | [2024-07-03 13:44:34,367] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 13:47:15 kafka | [2024-07-03 13:44:34,367] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,370] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 13:47:15 kafka | [2024-07-03 13:44:34,371] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) 13:47:15 kafka | [2024-07-03 13:44:34,372] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,372] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,372] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,373] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,373] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 13:47:15 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 13:47:15 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 13:47:15 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 13:47:15 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 13:47:15 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 13:47:15 kafka | [2024-07-03 13:44:34,374] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,375] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 13:47:15 kafka | [2024-07-03 13:44:34,385] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:34,396] INFO [/config/changes-event-process-thread]: Starting 
(kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 13:47:15 kafka | [2024-07-03 13:44:34,419] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 13:47:15 kafka | [2024-07-03 13:44:34,422] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 13:47:15 kafka | [2024-07-03 13:44:34,426] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 13:47:15 kafka | [2024-07-03 13:44:34,435] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) 13:47:15 kafka | [2024-07-03 13:44:34,435] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) 13:47:15 kafka | [2024-07-03 13:44:34,435] INFO Kafka startTimeMs: 1720014274429 (org.apache.kafka.common.utils.AppInfoParser) 13:47:15 kafka | [2024-07-03 13:44:34,436] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 13:47:15 kafka | [2024-07-03 13:44:34,478] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 13:47:15 kafka | [2024-07-03 13:44:34,530] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 13:47:15 kafka | [2024-07-03 13:44:34,571] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 13:47:15 kafka | [2024-07-03 13:44:34,607] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 13:47:15 kafka | [2024-07-03 13:44:39,387] INFO [Controller id=1] Processing automatic 
preferred replica leader election (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:44:39,387] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:45:06,222] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 13:47:15 kafka | [2024-07-03 13:45:06,222] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:45:06,224] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 13:47:15 kafka | [2024-07-03 13:45:06,236] INFO [Controller id=1] Acquired new 
producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:45:06,353] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(Suv4UYMeSie1YECc55SKoQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(MPVbNN_-QvC5pM4yJsju-Q),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:45:06,354] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | 
[2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,356] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,357] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,358] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,359] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,360] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,360] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,360] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,360] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,360] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,365] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,366] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,367] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,499] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,500] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,501] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,502] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,504] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,504] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,505] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-0 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,506] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-28 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,507] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,508] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,509] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,509] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,519] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to 
OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 
13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 
1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,522] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,523] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,527] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,528] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,529] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,530] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,559] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,560] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,561] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
13:47:15 kafka | [2024-07-03 13:45:06,561] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,625] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,635] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,638] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,639] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,641] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,659] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,660] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,660] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,660] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,660] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,669] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,669] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,670] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,670] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,670] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,676] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,677] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,677] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,677] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,677] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,753] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,753] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,754] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,754] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,754] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,760] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,761] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,761] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,761] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,761] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,768] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,768] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,769] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,769] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,769] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,776] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,777] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,777] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,777] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,777] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,782] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,783] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,783] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,783] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,783] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,789] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,790] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,790] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,790] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,790] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,799] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,799] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,799] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,799] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,799] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,806] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,806] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,806] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,806] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,806] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,813] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,814] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,814] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,814] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,814] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,820] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,821] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,821] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,821] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,821] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,827] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,828] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,828] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,828] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,828] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,834] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,835] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,835] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,835] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,835] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,842] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,843] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,843] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,843] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,843] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,851] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,851] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,851] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,851] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,852] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,858] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,859] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,859] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,859] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,859] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,866] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,866] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,866] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,866] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,866] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,874] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,874] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,874] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,874] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,874] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,881] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,882] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,882] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,882] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,882] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,890] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,891] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,891] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,891] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,891] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,899] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,901] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,901] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,901] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,901] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,908] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,909] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,909] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,909] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,909] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,916] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,917] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,917] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,917] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,917] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,925] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,925] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,925] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,925] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,926] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:06,932] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:47:15 kafka | [2024-07-03 13:45:06,932] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:47:15 kafka | [2024-07-03 13:45:06,932] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,932] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 13:47:15 kafka | [2024-07-03 13:45:06,932] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,940] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,941] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,941] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,941] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,941] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,977] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,977] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,977] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,977] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,978] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,985] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,985] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,985] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,985] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,985] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:06,993] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:06,993] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:06,993] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,993] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:06,993] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,007] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,007] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,008] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,008] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,008] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,014] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,015] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,015] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,015] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,015] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,023] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,024] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,024] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,024] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,024] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(Suv4UYMeSie1YECc55SKoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,031] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,032] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,032] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,032] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,032] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,038] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,039] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,039] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,039] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,039] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,047] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,047] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,048] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,048] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,048] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,058] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,058] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,059] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,059] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,059] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,067] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,068] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,068] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,068] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,068] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,075] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,076] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,076] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,076] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,076] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,083] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,084] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,084] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,084] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,084] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,091] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,091] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,091] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,092] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,092] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,100] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,101] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,101] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,101] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,101] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,107] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,108] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,108] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,108] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,108] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,114] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,114] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,114] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,114] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,114] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,122] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,123] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,123] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,123] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,123] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,155] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,157] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,157] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,157] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,157] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,166] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,166] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,166] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,166] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,166] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,178] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,179] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,179] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,179] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,180] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,186] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:47:15 kafka | [2024-07-03 13:45:07,187] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:47:15 kafka | [2024-07-03 13:45:07,187] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,187] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
13:47:15 kafka | [2024-07-03 13:45:07,187] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(MPVbNN_-QvC5pM4yJsju-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,193] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,194] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,204] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:07,206] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] 
INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 
13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] 
INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:47:15 kafka | [2024-07-03 13:45:07,209] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,214] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,217] INFO [Broker id=1] Finished LeaderAndIsr request in 688ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,223] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=MPVbNN_-QvC5pM4yJsju-Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), 
LeaderAndIsrTopicError(topicId=Suv4UYMeSie1YECc55SKoQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,225] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,225] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,226] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,226] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,226] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,226] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,226] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,226] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,227] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,227] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:47:15 kafka | [2024-07-03 13:45:07,234] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,235] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 
13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent 
by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 
in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,236] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 13:47:15 kafka | [2024-07-03 13:45:07,237] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,238] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
13:47:15 kafka | [2024-07-03 13:45:07,296] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-22adce61-2bf5-4ed8-8fb3-c45a45f44abb and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:07,308] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-22adce61-2bf5-4ed8-8fb3-c45a45f44abb with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:07,332] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5c96c918-54eb-401a-98be-aaba56deddd0 in Empty state. Created a new member id consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3-742e9a4e-ecbb-4557-ba8f-8319bc1a4974 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:07,339] INFO [GroupCoordinator 1]: Preparing to rebalance group 5c96c918-54eb-401a-98be-aaba56deddd0 in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3-742e9a4e-ecbb-4557-ba8f-8319bc1a4974 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:08,053] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 0eeb2e6c-3070-49fd-b7f1-f46f0580551a in Empty state. Created a new member id consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-af756c56-bc2c-479f-a27f-998b77422a6d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:08,057] INFO [GroupCoordinator 1]: Preparing to rebalance group 0eeb2e6c-3070-49fd-b7f1-f46f0580551a in state PreparingRebalance with old generation 0 (__consumer_offsets-32) (reason: Adding new member consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-af756c56-bc2c-479f-a27f-998b77422a6d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:10,320] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:10,341] INFO [GroupCoordinator 1]: Stabilized group 5c96c918-54eb-401a-98be-aaba56deddd0 generation 1 (__consumer_offsets-37) with 1 members (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:10,341] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-22adce61-2bf5-4ed8-8fb3-c45a45f44abb for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:10,346] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3-742e9a4e-ecbb-4557-ba8f-8319bc1a4974 for group 5c96c918-54eb-401a-98be-aaba56deddd0 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:11,058] INFO [GroupCoordinator 1]: Stabilized group 0eeb2e6c-3070-49fd-b7f1-f46f0580551a generation 1 (__consumer_offsets-32) with 1 members (kafka.coordinator.group.GroupCoordinator)
13:47:15 kafka | [2024-07-03 13:45:11,075] INFO [GroupCoordinator 1]: Assignment received from leader consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-af756c56-bc2c-479f-a27f-998b77422a6d for group 0eeb2e6c-3070-49fd-b7f1-f46f0580551a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
13:47:15 ===================================
13:47:15 ======== Logs from mariadb ========
13:47:15 mariadb | 2024-07-03 13:44:23+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
13:47:15 mariadb | 2024-07-03 13:44:23+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
13:47:15 mariadb | 2024-07-03 13:44:23+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
13:47:15 mariadb | 2024-07-03 13:44:23+00:00 [Note] [Entrypoint]: Initializing database files
13:47:15 mariadb | 2024-07-03 13:44:23 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
13:47:15 mariadb | 2024-07-03 13:44:23 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
13:47:15 mariadb | 2024-07-03 13:44:24 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
13:47:15 mariadb |
13:47:15 mariadb |
13:47:15 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
13:47:15 mariadb | To do so, start the server, then issue the following command:
13:47:15 mariadb |
13:47:15 mariadb | '/usr/bin/mysql_secure_installation'
13:47:15 mariadb |
13:47:15 mariadb | which will also give you the option of removing the test
13:47:15 mariadb | databases and anonymous user created by default. This is
13:47:15 mariadb | strongly recommended for production servers.
13:47:15 mariadb |
13:47:15 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
13:47:15 mariadb |
13:47:15 mariadb | Please report any problems at https://mariadb.org/jira
13:47:15 mariadb |
13:47:15 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
13:47:15 mariadb |
13:47:15 mariadb | Consider joining MariaDB's strong and vibrant community:
13:47:15 mariadb | https://mariadb.org/get-involved/
13:47:15 mariadb |
13:47:15 mariadb | 2024-07-03 13:44:29+00:00 [Note] [Entrypoint]: Database files initialized
13:47:15 mariadb | 2024-07-03 13:44:29+00:00 [Note] [Entrypoint]: Starting temporary server
13:47:15 mariadb | 2024-07-03 13:44:29+00:00 [Note] [Entrypoint]: Waiting for server startup
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ...
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: Number of transaction pools: 1
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: Completed initialization of buffer pool
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: 128 rollback segments are active.
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] InnoDB: log sequence number 46456; transaction id 14
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] Plugin 'FEEDBACK' is disabled.
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
13:47:15 mariadb | 2024-07-03 13:44:29 0 [Note] mariadbd: ready for connections.
13:47:15 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
13:47:15 mariadb | 2024-07-03 13:44:30+00:00 [Note] [Entrypoint]: Temporary server started.
13:47:15 mariadb | 2024-07-03 13:44:33+00:00 [Note] [Entrypoint]: Creating user policy_user
13:47:15 mariadb | 2024-07-03 13:44:33+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
13:47:15 mariadb |
13:47:15 mariadb | 2024-07-03 13:44:33+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
13:47:15 mariadb |
13:47:15 mariadb | 2024-07-03 13:44:33+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
13:47:15 mariadb | #!/bin/bash -xv
13:47:15 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
13:47:15 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
13:47:15 mariadb | #
13:47:15 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
13:47:15 mariadb | # you may not use this file except in compliance with the License.
13:47:15 mariadb | # You may obtain a copy of the License at
13:47:15 mariadb | #
13:47:15 mariadb | # http://www.apache.org/licenses/LICENSE-2.0
13:47:15 mariadb | #
13:47:15 mariadb | # Unless required by applicable law or agreed to in writing, software
13:47:15 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
13:47:15 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13:47:15 mariadb | # See the License for the specific language governing permissions and
13:47:15 mariadb | # limitations under the License.
13:47:15 mariadb |
13:47:15 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:47:15 mariadb | do
13:47:15 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
13:47:15 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
13:47:15 mariadb | done
13:47:15 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:47:15 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
13:47:15 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:47:15 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:47:15 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
13:47:15 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:47:15 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:47:15 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
13:47:15 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:47:15 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:47:15 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
13:47:15 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:47:15 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:47:15 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
13:47:15 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:47:15 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:47:15 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
13:47:15 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:47:15 mariadb |
13:47:15 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
13:47:15 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
13:47:15 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
13:47:15 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
13:47:15 mariadb |
13:47:15 mariadb | 2024-07-03 13:44:34+00:00 [Note] [Entrypoint]: Stopping temporary server
13:47:15 mariadb | 2024-07-03 13:44:34 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
13:47:15 mariadb | 2024-07-03 13:44:34 0 [Note] InnoDB: FTS optimize thread exiting.
13:47:15 mariadb | 2024-07-03 13:44:34 0 [Note] InnoDB: Starting shutdown...
13:47:15 mariadb | 2024-07-03 13:44:34 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
13:47:15 mariadb | 2024-07-03 13:44:34 0 [Note] InnoDB: Buffer pool(s) dump completed at 240703 13:44:34
13:47:15 mariadb | 2024-07-03 13:44:34 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
13:47:15 mariadb | 2024-07-03 13:44:34 0 [Note] InnoDB: Shutdown completed; log sequence number 347134; transaction id 298
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] mariadbd: Shutdown complete
13:47:15 mariadb |
13:47:15 mariadb | 2024-07-03 13:44:35+00:00 [Note] [Entrypoint]: Temporary server stopped
13:47:15 mariadb |
13:47:15 mariadb | 2024-07-03 13:44:35+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
13:47:15 mariadb |
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Number of transaction pools: 1
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Completed initialization of buffer pool
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: 128 rollback segments are active.
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: log sequence number 347134; transaction id 299
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] Plugin 'FEEDBACK' is disabled.
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] Server socket created on IP: '0.0.0.0'.
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] Server socket created on IP: '::'.
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] mariadbd: ready for connections.
13:47:15 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
13:47:15 mariadb | 2024-07-03 13:44:35 0 [Note] InnoDB: Buffer pool(s) load completed at 240703 13:44:35
13:47:15 mariadb | 2024-07-03 13:44:35 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
13:47:15 mariadb | 2024-07-03 13:44:35 9 [Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
13:47:15 mariadb | 2024-07-03 13:44:35 25 [Warning] Aborted connection 25 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
13:47:15 mariadb | 2024-07-03 13:44:36 35 [Warning] Aborted connection 35 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
13:47:15 ===================================
13:47:15 ======== Logs from apex-pdp ========
13:47:15 policy-apex-pdp | Waiting for mariadb port 3306...
13:47:15 policy-apex-pdp | mariadb (172.17.0.5:3306) open
13:47:15 policy-apex-pdp | Waiting for kafka port 9092...
13:47:15 policy-apex-pdp | Waiting for pap port 6969...
13:47:15 policy-apex-pdp | kafka (172.17.0.6:9092) open
13:47:15 policy-apex-pdp | pap (172.17.0.10:6969) open
13:47:15 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.345+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.481+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
13:47:15 policy-apex-pdp | allow.auto.create.topics = true
13:47:15 policy-apex-pdp | auto.commit.interval.ms = 5000
13:47:15 policy-apex-pdp | auto.include.jmx.reporter = true
13:47:15 policy-apex-pdp | auto.offset.reset = latest
13:47:15 policy-apex-pdp | bootstrap.servers = [kafka:9092]
13:47:15 policy-apex-pdp | check.crcs = true
13:47:15 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
13:47:15 policy-apex-pdp | client.id = consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-1
13:47:15 policy-apex-pdp | client.rack =
13:47:15 policy-apex-pdp | connections.max.idle.ms = 540000
13:47:15 policy-apex-pdp | default.api.timeout.ms = 60000
13:47:15 policy-apex-pdp | enable.auto.commit = true
13:47:15 policy-apex-pdp | exclude.internal.topics = true
13:47:15 policy-apex-pdp | fetch.max.bytes = 52428800
13:47:15 policy-apex-pdp | fetch.max.wait.ms = 500
13:47:15 policy-apex-pdp | fetch.min.bytes = 1
13:47:15 policy-apex-pdp | group.id = 0eeb2e6c-3070-49fd-b7f1-f46f0580551a
13:47:15 policy-apex-pdp | group.instance.id = null
13:47:15 policy-apex-pdp | heartbeat.interval.ms = 3000
13:47:15 policy-apex-pdp | interceptor.classes = []
13:47:15 policy-apex-pdp | internal.leave.group.on.close = true
13:47:15 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
13:47:15 policy-apex-pdp | isolation.level = read_uncommitted
13:47:15 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:47:15 policy-apex-pdp | max.partition.fetch.bytes = 1048576
13:47:15 policy-apex-pdp | max.poll.interval.ms = 300000
13:47:15 policy-apex-pdp | max.poll.records = 500
13:47:15 policy-apex-pdp | metadata.max.age.ms = 300000
13:47:15 policy-apex-pdp | metric.reporters = []
13:47:15 policy-apex-pdp | metrics.num.samples = 2
13:47:15 policy-apex-pdp | metrics.recording.level = INFO
13:47:15 policy-apex-pdp | metrics.sample.window.ms = 30000
13:47:15 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
13:47:15 policy-apex-pdp | receive.buffer.bytes = 65536
13:47:15 policy-apex-pdp | reconnect.backoff.max.ms = 1000
13:47:15 policy-apex-pdp | reconnect.backoff.ms = 50
13:47:15 policy-apex-pdp | request.timeout.ms = 30000
13:47:15 policy-apex-pdp | retry.backoff.ms = 100
13:47:15 policy-apex-pdp | sasl.client.callback.handler.class = null
13:47:15 policy-apex-pdp | sasl.jaas.config = null
13:47:15 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:47:15 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
13:47:15 policy-apex-pdp | sasl.kerberos.service.name = null
13:47:15 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
13:47:15 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
13:47:15 policy-apex-pdp | sasl.login.callback.handler.class = null
13:47:15 policy-apex-pdp | sasl.login.class = null
13:47:15 policy-apex-pdp | sasl.login.connect.timeout.ms = null
13:47:15 policy-apex-pdp | sasl.login.read.timeout.ms = null
13:47:15 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
13:47:15 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
13:47:15 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
13:47:15 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
13:47:15 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
13:47:15 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
13:47:15 policy-apex-pdp | sasl.mechanism = GSSAPI
13:47:15 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
13:47:15 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
13:47:15 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
13:47:15 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
13:47:15 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
13:47:15 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
13:47:15 policy-apex-pdp | security.protocol = PLAINTEXT
13:47:15 policy-apex-pdp | security.providers = null
13:47:15 policy-apex-pdp | send.buffer.bytes = 131072
13:47:15 policy-apex-pdp | session.timeout.ms = 45000
13:47:15 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
13:47:15 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
13:47:15 policy-apex-pdp | ssl.cipher.suites = null
13:47:15 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:47:15 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
13:47:15 policy-apex-pdp | ssl.engine.factory.class = null
13:47:15 policy-apex-pdp | ssl.key.password = null
13:47:15 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
13:47:15 policy-apex-pdp | ssl.keystore.certificate.chain = null
13:47:15 policy-apex-pdp | ssl.keystore.key = null
13:47:15 policy-apex-pdp | ssl.keystore.location = null
13:47:15 policy-apex-pdp | ssl.keystore.password = null
13:47:15 policy-apex-pdp | ssl.keystore.type = JKS
13:47:15 policy-apex-pdp | ssl.protocol = TLSv1.3
13:47:15 policy-apex-pdp | ssl.provider = null
13:47:15 policy-apex-pdp | ssl.secure.random.implementation = null
13:47:15 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
13:47:15 policy-apex-pdp | ssl.truststore.certificates = null
13:47:15 policy-apex-pdp | ssl.truststore.location = null
13:47:15 policy-apex-pdp | ssl.truststore.password = null
13:47:15 policy-apex-pdp | ssl.truststore.type = JKS
13:47:15 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:47:15 policy-apex-pdp |
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.629+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.629+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.629+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014307628
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.631+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-1, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Subscribed to topic(s): policy-pdp-pap
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.643+00:00|INFO|ServiceManager|main] service manager starting
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.643+00:00|INFO|ServiceManager|main] service manager starting topics
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.645+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0eeb2e6c-3070-49fd-b7f1-f46f0580551a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.664+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
13:47:15 policy-apex-pdp | allow.auto.create.topics = true
13:47:15 policy-apex-pdp | auto.commit.interval.ms = 5000
13:47:15 policy-apex-pdp | auto.include.jmx.reporter = true
13:47:15 policy-apex-pdp | auto.offset.reset = latest
13:47:15 policy-apex-pdp | bootstrap.servers = [kafka:9092]
13:47:15 policy-apex-pdp | check.crcs = true
13:47:15 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
13:47:15 policy-apex-pdp | client.id = consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2
13:47:15 policy-apex-pdp | client.rack =
13:47:15 policy-apex-pdp | connections.max.idle.ms = 540000
13:47:15 policy-apex-pdp | default.api.timeout.ms = 60000
13:47:15 policy-apex-pdp | enable.auto.commit = true
13:47:15 policy-apex-pdp | exclude.internal.topics = true
13:47:15 policy-apex-pdp | fetch.max.bytes = 52428800
13:47:15 policy-apex-pdp | fetch.max.wait.ms = 500
13:47:15 policy-apex-pdp | fetch.min.bytes = 1
13:47:15 policy-apex-pdp | group.id = 0eeb2e6c-3070-49fd-b7f1-f46f0580551a
13:47:15 policy-apex-pdp | group.instance.id = null
13:47:15 policy-apex-pdp | heartbeat.interval.ms = 3000
13:47:15 policy-apex-pdp | interceptor.classes = []
13:47:15 policy-apex-pdp | internal.leave.group.on.close = true
13:47:15 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
13:47:15 policy-apex-pdp | isolation.level = read_uncommitted
13:47:15 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:47:15 policy-apex-pdp | max.partition.fetch.bytes = 1048576
13:47:15 policy-apex-pdp | max.poll.interval.ms = 300000
13:47:15 policy-apex-pdp | max.poll.records = 500
13:47:15 policy-apex-pdp | metadata.max.age.ms = 300000
13:47:15 policy-apex-pdp | metric.reporters = []
13:47:15 policy-apex-pdp | metrics.num.samples = 2
13:47:15 policy-apex-pdp | metrics.recording.level = INFO
13:47:15 policy-apex-pdp | metrics.sample.window.ms = 30000
13:47:15 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
13:47:15 policy-apex-pdp | receive.buffer.bytes = 65536
13:47:15 policy-apex-pdp | reconnect.backoff.max.ms = 1000
13:47:15 policy-apex-pdp | reconnect.backoff.ms = 50
13:47:15 policy-apex-pdp | request.timeout.ms = 30000
13:47:15 policy-apex-pdp | retry.backoff.ms = 100
13:47:15 policy-apex-pdp | sasl.client.callback.handler.class = null
13:47:15 policy-apex-pdp | sasl.jaas.config = null
13:47:15 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:47:15 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
13:47:15 policy-apex-pdp | sasl.kerberos.service.name = null
13:47:15 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
13:47:15 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
13:47:15 policy-apex-pdp | sasl.login.callback.handler.class = null
13:47:15 policy-apex-pdp | sasl.login.class = null
13:47:15 policy-apex-pdp | sasl.login.connect.timeout.ms = null
13:47:15 policy-apex-pdp | sasl.login.read.timeout.ms = null
13:47:15 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
13:47:15 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
13:47:15 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
13:47:15 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
13:47:15 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
13:47:15 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
13:47:15 policy-apex-pdp | sasl.mechanism = GSSAPI
13:47:15 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
13:47:15 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
13:47:15 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
13:47:15 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
13:47:15 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
13:47:15 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
13:47:15 policy-apex-pdp | security.protocol = PLAINTEXT
13:47:15 policy-apex-pdp | security.providers = null
13:47:15 policy-apex-pdp | send.buffer.bytes = 131072
13:47:15 policy-apex-pdp | session.timeout.ms = 45000
13:47:15 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
13:47:15 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
13:47:15 policy-apex-pdp | ssl.cipher.suites = null
13:47:15 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:47:15 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
13:47:15 policy-apex-pdp | ssl.engine.factory.class = null
13:47:15 policy-apex-pdp | ssl.key.password = null
13:47:15 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
13:47:15 policy-apex-pdp | ssl.keystore.certificate.chain = null
13:47:15 policy-apex-pdp | ssl.keystore.key = null
13:47:15 policy-apex-pdp | ssl.keystore.location = null
13:47:15 policy-apex-pdp | ssl.keystore.password = null
13:47:15 policy-apex-pdp | ssl.keystore.type = JKS
13:47:15 policy-apex-pdp | ssl.protocol = TLSv1.3
13:47:15 policy-apex-pdp | ssl.provider = null
13:47:15 policy-apex-pdp | ssl.secure.random.implementation = null
13:47:15 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
13:47:15 policy-apex-pdp | ssl.truststore.certificates = null
13:47:15 policy-apex-pdp | ssl.truststore.location = null
13:47:15 policy-apex-pdp | ssl.truststore.password = null
13:47:15 policy-apex-pdp | ssl.truststore.type = JKS
13:47:15 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:47:15 policy-apex-pdp |
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.672+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.672+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.672+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014307672
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.672+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Subscribed to topic(s): policy-pdp-pap
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.673+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b48b188a-064c-4182-b936-3b628f1d4f5d, alive=false, publisher=null]]: starting
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.684+00:00|INFO|ProducerConfig|main] ProducerConfig values:
13:47:15 policy-apex-pdp | acks = -1
13:47:15 policy-apex-pdp | auto.include.jmx.reporter = true
13:47:15 policy-apex-pdp | batch.size = 16384
13:47:15 policy-apex-pdp | bootstrap.servers = [kafka:9092]
13:47:15 policy-apex-pdp | buffer.memory = 33554432
13:47:15 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
13:47:15 policy-apex-pdp | client.id = producer-1
13:47:15 policy-apex-pdp | compression.type = none
13:47:15 policy-apex-pdp | connections.max.idle.ms = 540000
13:47:15 policy-apex-pdp | delivery.timeout.ms = 120000 13:47:15 policy-apex-pdp | enable.idempotence = true 13:47:15 policy-apex-pdp | interceptor.classes = [] 13:47:15 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:47:15 policy-apex-pdp | linger.ms = 0 13:47:15 policy-apex-pdp | max.block.ms = 60000 13:47:15 policy-apex-pdp | max.in.flight.requests.per.connection = 5 13:47:15 policy-apex-pdp | max.request.size = 1048576 13:47:15 policy-apex-pdp | metadata.max.age.ms = 300000 13:47:15 policy-apex-pdp | metadata.max.idle.ms = 300000 13:47:15 policy-apex-pdp | metric.reporters = [] 13:47:15 policy-apex-pdp | metrics.num.samples = 2 13:47:15 policy-apex-pdp | metrics.recording.level = INFO 13:47:15 policy-apex-pdp | metrics.sample.window.ms = 30000 13:47:15 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 13:47:15 policy-apex-pdp | partitioner.availability.timeout.ms = 0 13:47:15 policy-apex-pdp | partitioner.class = null 13:47:15 policy-apex-pdp | partitioner.ignore.keys = false 13:47:15 policy-apex-pdp | receive.buffer.bytes = 32768 13:47:15 policy-apex-pdp | reconnect.backoff.max.ms = 1000 13:47:15 policy-apex-pdp | reconnect.backoff.ms = 50 13:47:15 policy-apex-pdp | request.timeout.ms = 30000 13:47:15 policy-apex-pdp | retries = 2147483647 13:47:15 policy-apex-pdp | retry.backoff.ms = 100 13:47:15 policy-apex-pdp | sasl.client.callback.handler.class = null 13:47:15 policy-apex-pdp | sasl.jaas.config = null 13:47:15 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:47:15 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 13:47:15 policy-apex-pdp | sasl.kerberos.service.name = null 13:47:15 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 13:47:15 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 13:47:15 policy-apex-pdp | sasl.login.callback.handler.class = null 13:47:15 policy-apex-pdp | sasl.login.class = null 13:47:15 policy-apex-pdp | 
sasl.login.connect.timeout.ms = null 13:47:15 policy-apex-pdp | sasl.login.read.timeout.ms = null 13:47:15 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 13:47:15 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 13:47:15 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 13:47:15 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 13:47:15 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 13:47:15 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 13:47:15 policy-apex-pdp | sasl.mechanism = GSSAPI 13:47:15 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 13:47:15 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 13:47:15 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:47:15 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 13:47:15 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 13:47:15 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 13:47:15 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 13:47:15 policy-apex-pdp | security.protocol = PLAINTEXT 13:47:15 policy-apex-pdp | security.providers = null 13:47:15 policy-apex-pdp | send.buffer.bytes = 131072 13:47:15 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 13:47:15 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 13:47:15 policy-apex-pdp | ssl.cipher.suites = null 13:47:15 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:47:15 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 13:47:15 policy-apex-pdp | ssl.engine.factory.class = null 13:47:15 policy-apex-pdp | ssl.key.password = null 13:47:15 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 13:47:15 policy-apex-pdp | 
ssl.keystore.certificate.chain = null 13:47:15 policy-apex-pdp | ssl.keystore.key = null 13:47:15 policy-apex-pdp | ssl.keystore.location = null 13:47:15 policy-apex-pdp | ssl.keystore.password = null 13:47:15 policy-apex-pdp | ssl.keystore.type = JKS 13:47:15 policy-apex-pdp | ssl.protocol = TLSv1.3 13:47:15 policy-apex-pdp | ssl.provider = null 13:47:15 policy-apex-pdp | ssl.secure.random.implementation = null 13:47:15 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 13:47:15 policy-apex-pdp | ssl.truststore.certificates = null 13:47:15 policy-apex-pdp | ssl.truststore.location = null 13:47:15 policy-apex-pdp | ssl.truststore.password = null 13:47:15 policy-apex-pdp | ssl.truststore.type = JKS 13:47:15 policy-apex-pdp | transaction.timeout.ms = 60000 13:47:15 policy-apex-pdp | transactional.id = null 13:47:15 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:47:15 policy-apex-pdp | 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.693+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
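The ProducerConfig dump above is a plain `key = value` listing, and the very next log line confirms Kafka instantiated an *idempotent* producer. As a minimal illustrative sketch (plain Python, sample values copied from the log; the parser itself is an assumption, not Kafka code), this parses such a dump and checks the invariants idempotence relies on — `acks=all/-1` and at most 5 in-flight requests per connection:

```python
# Sketch: parse a ProducerConfig-style dump (as logged above) into a dict
# and sanity-check the settings an idempotent producer depends on.
# SAMPLE values are copied from the log; the helper functions are illustrative.

SAMPLE = """\
acks = -1
enable.idempotence = true
max.in.flight.requests.per.connection = 5
retries = 2147483647
batch.size = 16384
linger.ms = 0
"""

def parse_config_dump(text):
    """Turn 'key = value' lines into a {key: value} dict (values kept as strings)."""
    cfg = {}
    for line in text.splitlines():
        if " = " in line:
            key, _, value = line.partition(" = ")
            cfg[key.strip()] = value.strip()
    return cfg

def idempotence_ok(cfg):
    """Idempotent producers need acks=all (-1) and <= 5 in-flight requests."""
    return (
        cfg.get("enable.idempotence") == "true"
        and cfg.get("acks") in ("-1", "all")
        and int(cfg.get("max.in.flight.requests.per.connection", "5")) <= 5
    )

cfg = parse_config_dump(SAMPLE)
print(idempotence_ok(cfg))  # True for the logged configuration
```

This is why the log can show `retries = 2147483647` safely: with idempotence enabled, retries cannot produce duplicates.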
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.710+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.710+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.710+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014307710 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.711+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b48b188a-064c-4182-b936-3b628f1d4f5d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.711+00:00|INFO|ServiceManager|main] service manager starting set alive 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.711+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.713+00:00|INFO|ServiceManager|main] service manager starting topic sinks 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.713+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.715+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.715+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.715+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.715+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0eeb2e6c-3070-49fd-b7f1-f46f0580551a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, 
uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.716+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0eeb2e6c-3070-49fd-b7f1-f46f0580551a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.716+00:00|INFO|ServiceManager|main] service manager starting Create REST server 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.729+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 13:47:15 policy-apex-pdp | [] 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.731+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ca92a3a2-f731-4051-a9f1-05e4fe924973","timestampMs":1720014307715,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.876+00:00|INFO|ServiceManager|main] service manager starting Rest Server 13:47:15 
policy-apex-pdp | [2024-07-03T13:45:07.876+00:00|INFO|ServiceManager|main] service manager starting 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.876+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.876+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.887+00:00|INFO|ServiceManager|main] service manager started 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.887+00:00|INFO|ServiceManager|main] service manager started 13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.887+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
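The `[OUT|KAFKA|policy-pdp-pap]` heartbeat logged just above is a plain JSON document. A minimal sketch decoding one such record (payload copied verbatim from the log; the field access is just standard `json`, nothing APEX-specific):

```python
import json

# PDP_STATUS heartbeat payload copied verbatim from the log line above.
heartbeat = json.loads(
    '{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY",'
    '"description":"Pdp Heartbeat","messageName":"PDP_STATUS",'
    '"requestId":"ca92a3a2-f731-4051-a9f1-05e4fe924973",'
    '"timestampMs":1720014307715,'
    '"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec",'
    '"pdpGroup":"defaultGroup"}'
)

# A freshly started PDP reports PASSIVE until PAP sends a PDP_STATE_CHANGE,
# which is exactly what happens later in this log.
print(heartbeat["messageName"], heartbeat["state"])  # PDP_STATUS PASSIVE
```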
13:47:15 policy-apex-pdp | [2024-07-03T13:45:07.887+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.026+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Cluster ID: ZzyuOxDrRguYnYKgclDwYw 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.026+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: ZzyuOxDrRguYnYKgclDwYw 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.027+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 
with epoch 0 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.027+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.038+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] (Re-)joining group 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.055+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Request joining group due to: need to re-join with the given member-id: consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-af756c56-bc2c-479f-a27f-998b77422a6d 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.055+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.055+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] (Re-)joining group 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.469+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 13:47:15 policy-apex-pdp | [2024-07-03T13:45:08.471+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 13:47:15 policy-apex-pdp | [2024-07-03T13:45:09.178+00:00|INFO|RequestLog|qtp739264372-32] 172.17.0.1 - - [03/Jul/2024:13:45:09 +0000] "GET / HTTP/1.1" 401 495 "-" "curl/7.58.0" 13:47:15 policy-apex-pdp | [2024-07-03T13:45:11.061+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-af756c56-bc2c-479f-a27f-998b77422a6d', protocol='range'} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:11.071+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Finished assignment for group at generation 1: {consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-af756c56-bc2c-479f-a27f-998b77422a6d=Assignment(partitions=[policy-pdp-pap-0])} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:11.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-af756c56-bc2c-479f-a27f-998b77422a6d', protocol='range'} 13:47:15 policy-apex-pdp | 
[2024-07-03T13:45:11.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 13:47:15 policy-apex-pdp | [2024-07-03T13:45:11.083+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Adding newly assigned partitions: policy-pdp-pap-0 13:47:15 policy-apex-pdp | [2024-07-03T13:45:11.090+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Found no committed offset for partition policy-pdp-pap-0 13:47:15 policy-apex-pdp | [2024-07-03T13:45:11.102+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2, groupId=0eeb2e6c-3070-49fd-b7f1-f46f0580551a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
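The group-join sequence above is Kafka's normal two-step bootstrap: the first JoinGroup is rejected with `MemberIdRequiredException` so the broker can hand out a member id, and the client immediately rejoins with it, then gets synced and assigned `policy-pdp-pap-0`. A toy state machine of that sequence (purely illustrative — not the Kafka client's real implementation; the member id string is copied from the log):

```python
# Toy model of the join flow visible in the log:
# join (no member id) -> MemberIdRequired -> rejoin (with id) -> assigned.

MEMBER_ID = (
    "consumer-0eeb2e6c-3070-49fd-b7f1-f46f0580551a-2-"
    "af756c56-bc2c-479f-a27f-998b77422a6d"
)

def join_group(events, member_id=None):
    """Simulate one JoinGroup attempt against a broker that requires member ids."""
    if member_id is None:
        events.append("rebalance failed: MemberIdRequiredException")
        # The broker returns a generated member id; the client must rejoin with it.
        return join_group(events, member_id=MEMBER_ID)
    events.append(f"joined generation 1 as {member_id}")
    events.append("assigned partitions: ['policy-pdp-pap-0']")
    return member_id

events = []
join_group(events)
print(events[0])  # rebalance failed: MemberIdRequiredException
```

The ~3 s gap between the rejected join (13:45:08) and the successful one (13:45:11) is the coordinator waiting out the rebalance, not an error.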
13:47:15 policy-apex-pdp | [2024-07-03T13:45:27.716+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c6b3afb6-5983-4d84-968c-c30ffc9c944c","timestampMs":1720014327716,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:27.740+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c6b3afb6-5983-4d84-968c-c30ffc9c944c","timestampMs":1720014327716,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:27.742+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 13:47:15 policy-apex-pdp | [2024-07-03T13:45:27.978+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","timestampMs":1720014327889,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:27.987+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 13:47:15 policy-apex-pdp | [2024-07-03T13:45:27.987+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"13b5dd4f-ae45-4e1b-a9c6-030fb5b6080f","timestampMs":1720014327987,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:27.988+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b5c5a65d-ae64-44a4-b17e-60cdf59c8af9","timestampMs":1720014327988,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"13b5dd4f-ae45-4e1b-a9c6-030fb5b6080f","timestampMs":1720014327987,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.008+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"b5c5a65d-ae64-44a4-b17e-60cdf59c8af9","timestampMs":1720014327988,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.008+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.043+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","timestampMs":1720014327890,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.045+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"f429decc-52d3-4d6c-b2ff-6e8180242b6d","timestampMs":1720014328045,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.053+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"f429decc-52d3-4d6c-b2ff-6e8180242b6d","timestampMs":1720014328045,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.053+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.075+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","timestampMs":1720014328057,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.077+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"a3dce925-a4cd-4f29-a70a-e07aabbfae63","timestampMs":1720014328077,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.083+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:47:15 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"a3dce925-a4cd-4f29-a70a-e07aabbfae63","timestampMs":1720014328077,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-apex-pdp | [2024-07-03T13:45:28.084+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 13:47:15 policy-apex-pdp | [2024-07-03T13:45:29.235+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.1 - policyadmin [03/Jul/2024:13:45:29 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "-" "curl/7.58.0" 13:47:15 policy-apex-pdp | [2024-07-03T13:45:56.092+00:00|INFO|RequestLog|qtp739264372-29] 172.17.0.3 - policyadmin [03/Jul/2024:13:45:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.53.0" 13:47:15 policy-apex-pdp | [2024-07-03T13:46:06.494+00:00|INFO|RequestLog|qtp739264372-26] 172.17.0.8 - policyadmin [03/Jul/2024:13:46:06 +0000] "GET /policy/apex-pdp/v1/healthcheck?null HTTP/1.1" 200 109 "-" "python-requests/2.32.3" 13:47:15 policy-apex-pdp | [2024-07-03T13:46:08.168+00:00|INFO|RequestLog|qtp739264372-27] 172.17.0.8 - policyadmin [03/Jul/2024:13:46:08 +0000] "GET /metrics?null HTTP/1.1" 200 11009 "-" "python-requests/2.32.3" 13:47:15 policy-apex-pdp | [2024-07-03T13:46:08.193+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.8 - policyadmin [03/Jul/2024:13:46:08 +0000] "GET /policy/apex-pdp/v1/healthcheck?null HTTP/1.1" 200 109 "-" "python-requests/2.32.3" 13:47:15 policy-apex-pdp | [2024-07-03T13:46:56.083+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.3 - policyadmin [03/Jul/2024:13:46:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.53.0" 13:47:15 =================================== 13:47:15 ======== Logs from api ======== 13:47:15 policy-api | Waiting for mariadb port 3306... 13:47:15 policy-api | mariadb (172.17.0.5:3306) open 13:47:15 policy-api | Waiting for policy-db-migrator port 6824... 
13:47:15 policy-api | policy-db-migrator (172.17.0.8:6824) open 13:47:15 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 13:47:15 policy-api | 13:47:15 policy-api | . ____ _ __ _ _ 13:47:15 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 13:47:15 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 13:47:15 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 13:47:15 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 13:47:15 policy-api | =========|_|==============|___/=/_/_/_/ 13:47:15 policy-api | :: Spring Boot :: (v3.1.10) 13:47:15 policy-api | 13:47:15 policy-api | [2024-07-03T13:44:45.337+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 13:47:15 policy-api | [2024-07-03T13:44:45.396+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 28 (/app/api.jar started by policy in /opt/app/policy/api/bin) 13:47:15 policy-api | [2024-07-03T13:44:45.398+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 13:47:15 policy-api | [2024-07-03T13:44:47.293+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 13:47:15 policy-api | [2024-07-03T13:44:47.508+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 205 ms. Found 6 JPA repository interfaces. 
13:47:15 policy-api | [2024-07-03T13:44:48.415+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 13:47:15 policy-api | [2024-07-03T13:44:48.427+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 13:47:15 policy-api | [2024-07-03T13:44:48.429+00:00|INFO|StandardService|main] Starting service [Tomcat] 13:47:15 policy-api | [2024-07-03T13:44:48.429+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 13:47:15 policy-api | [2024-07-03T13:44:48.528+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 13:47:15 policy-api | [2024-07-03T13:44:48.529+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3069 ms 13:47:15 policy-api | [2024-07-03T13:44:48.861+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 13:47:15 policy-api | [2024-07-03T13:44:48.931+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 13:47:15 policy-api | [2024-07-03T13:44:48.969+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 13:47:15 policy-api | [2024-07-03T13:44:49.219+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 13:47:15 policy-api | [2024-07-03T13:44:49.250+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 13:47:15 policy-api | [2024-07-03T13:44:49.344+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@49122b8f 13:47:15 policy-api | [2024-07-03T13:44:49.346+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
13:47:15 policy-api | [2024-07-03T13:44:51.282+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 13:47:15 policy-api | [2024-07-03T13:44:51.286+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 13:47:15 policy-api | [2024-07-03T13:44:52.092+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 13:47:15 policy-api | [2024-07-03T13:44:52.912+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 13:47:15 policy-api | [2024-07-03T13:44:53.883+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 13:47:15 policy-api | [2024-07-03T13:44:54.054+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4dafea3f, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@531245fe, org.springframework.security.web.context.SecurityContextHolderFilter@6b6d68d2, org.springframework.security.web.header.HeaderWriterFilter@64e06cf0, org.springframework.security.web.authentication.logout.LogoutFilter@68b2ce71, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@71ebd1d9, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@912fdbb, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3d62648d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6d7298be, org.springframework.security.web.access.ExceptionTranslationFilter@321f97d9, 
org.springframework.security.web.access.intercept.AuthorizationFilter@5ac88d71] 13:47:15 policy-api | [2024-07-03T13:44:54.733+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 13:47:15 policy-api | [2024-07-03T13:44:54.809+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 13:47:15 policy-api | [2024-07-03T13:44:54.824+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 13:47:15 policy-api | [2024-07-03T13:44:54.846+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.187 seconds (process running for 10.811) 13:47:15 policy-api | [2024-07-03T13:45:39.926+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 13:47:15 policy-api | [2024-07-03T13:45:39.927+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 13:47:15 policy-api | [2024-07-03T13:45:39.929+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms 13:47:15 policy-api | [2024-07-03T13:46:06.678+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 13:47:15 policy-api | [] 13:47:15 =================================== 13:47:15 ======== Logs from csit-tests ======== 13:47:15 policy-csit | Invoking the robot tests from: apex-pdp-test.robot apex-slas.robot 13:47:15 policy-csit | Run Robot test 13:47:15 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 13:47:15 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 13:47:15 policy-csit | -v POLICY_API_IP:policy-api:6969 13:47:15 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 13:47:15 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 13:47:15 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 13:47:15 policy-csit | -v 
APEX_IP:policy-apex-pdp:6969
13:47:15 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
13:47:15 policy-csit | -v KAFKA_IP:kafka:9092
13:47:15 policy-csit | -v PROMETHEUS_IP:prometheus:9090
13:47:15 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
13:47:15 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
13:47:15 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
13:47:15 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
13:47:15 policy-csit | -v TEMP_FOLDER:/tmp/distribution
13:47:15 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
13:47:15 policy-csit | -v TEST_ENV:docker
13:47:15 policy-csit | -v JAEGER_IP:jaeger:16686
13:47:15 policy-csit | Starting Robot test suites ...
13:47:15 policy-csit | ==============================================================================
13:47:15 policy-csit | Apex-Pdp-Test & Apex-Slas
13:47:15 policy-csit | ==============================================================================
13:47:15 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test
13:47:15 policy-csit | ==============================================================================
13:47:15 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
13:47:15 policy-csit | ------------------------------------------------------------------------------
13:47:15 policy-csit | ExecuteApexSampleDomainPolicy | FAIL |
13:47:15 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
13:47:15 policy-csit | ------------------------------------------------------------------------------
13:47:15 policy-csit | ExecuteApexTestPnfPolicy | FAIL |
13:47:15 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
13:47:15 policy-csit | ------------------------------------------------------------------------------
13:47:15 policy-csit | ExecuteApexTestPnfPolicyWithMetadataSet | FAIL |
13:47:15 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
13:47:15 policy-csit | ------------------------------------------------------------------------------
13:47:15 policy-csit | Metrics :: Verify policy-apex-pdp is exporting prometheus metrics | FAIL |
13:47:15 policy-csit | '# HELP jvm_classes_currently_loaded The number of classes that are currently loaded in the JVM
13:47:15 policy-csit | # TYPE jvm_classes_currently_loaded gauge
13:47:15 policy-csit | jvm_classes_currently_loaded 7527.0
13:47:15 policy-csit | # HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
13:47:15 policy-csit | # TYPE jvm_classes_loaded_total counter
13:47:15 policy-csit | jvm_classes_loaded_total 7527.0
13:47:15 policy-csit | # HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
13:47:15 policy-csit | # TYPE jvm_classes_unloaded_total counter
13:47:15 policy-csit | jvm_classes_unloaded_total 0.0
13:47:15 policy-csit | # HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
13:47:15 policy-csit | # TYPE process_cpu_seconds_total counter
13:47:15 policy-csit | process_cpu_seconds_total 7.53
13:47:15 policy-csit | # HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
13:47:15 policy-csit | # TYPE process_start_time_seconds gauge
13:47:15 policy-csit | process_start_time_seconds 1.720014306687E9
13:47:15 policy-csit | [ Message content over the limit has been removed. ]
13:47:15 policy-csit | # TYPE pdpa_policy_deployments_total counter
13:47:15 policy-csit | # HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
13:47:15 policy-csit | # TYPE jvm_memory_pool_allocated_bytes_created gauge
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.720014307951E9
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Old Gen",} 1.720014307971E9
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Eden Space",} 1.720014307971E9
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.720014307971E9
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Survivor Space",} 1.720014307971E9
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.720014307971E9
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.720014307971E9
13:47:15 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.720014307971E9
13:47:15 policy-csit | ' does not contain 'pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0'
13:47:15 policy-csit | ------------------------------------------------------------------------------
13:47:15 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test | FAIL |
13:47:15 policy-csit | 5 tests, 1 passed, 4 failed
13:47:15 policy-csit | ==============================================================================
13:47:15 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas
13:47:15 policy-csit | ==============================================================================
13:47:15 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
13:47:15 policy-csit | ------------------------------------------------------------------------------
13:47:15 policy-csit | ValidatePolicyExecutionAndEventRateLowComplexity :: Validate that ...
| FAIL | 13:47:15 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 13:47:15 policy-csit | ------------------------------------------------------------------------------ 13:47:15 policy-csit | ValidatePolicyExecutionAndEventRateModerateComplexity :: Validate ... | FAIL | 13:47:15 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 13:47:15 policy-csit | ------------------------------------------------------------------------------ 13:47:15 policy-csit | ValidatePolicyExecutionAndEventRateHighComplexity :: Validate that... | FAIL | 13:47:15 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200 13:47:15 policy-csit | ------------------------------------------------------------------------------ 13:47:15 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | 13:47:15 policy-csit | ------------------------------------------------------------------------------ 13:47:15 policy-csit | ValidatePolicyExecutionTimes :: Validate policy execution times us... 
| FAIL | 13:47:15 policy-csit | Resolving variable '${resp['data']['result'][0]['value'][1]}' failed: IndexError: list index out of range 13:47:15 policy-csit | ------------------------------------------------------------------------------ 13:47:15 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas | FAIL | 13:47:15 policy-csit | 6 tests, 2 passed, 4 failed 13:47:15 policy-csit | ============================================================================== 13:47:15 policy-csit | Apex-Pdp-Test & Apex-Slas | FAIL | 13:47:15 policy-csit | 11 tests, 3 passed, 8 failed 13:47:15 policy-csit | ============================================================================== 13:47:15 policy-csit | Output: /tmp/results/output.xml 13:47:15 policy-csit | Log: /tmp/results/log.html 13:47:15 policy-csit | Report: /tmp/results/report.html 13:47:15 policy-csit | RESULT: 8 13:47:15 =================================== 13:47:15 ======== Logs from policy-db-migrator ======== 13:47:15 policy-db-migrator | Waiting for mariadb port 3306... 
13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 13:47:15 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded! 
13:47:15 policy-db-migrator | 321 blocks 13:47:15 policy-db-migrator | Preparing upgrade release version: 0800 13:47:15 policy-db-migrator | Preparing upgrade release version: 0900 13:47:15 policy-db-migrator | Preparing upgrade release version: 1000 13:47:15 policy-db-migrator | Preparing upgrade release version: 1100 13:47:15 policy-db-migrator | Preparing upgrade release version: 1200 13:47:15 policy-db-migrator | Preparing upgrade release version: 1300 13:47:15 policy-db-migrator | Done 13:47:15 policy-db-migrator | name version 13:47:15 policy-db-migrator | policyadmin 0 13:47:15 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 13:47:15 policy-db-migrator | upgrade: 0 -> 1300 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 13:47:15 
policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 
policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 
0250-jpatoscanodetemplate_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | 
-------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 
policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 13:47:15 
policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY 
VARCHAR(255) NULL) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0450-pdpgroup.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0470-pdp.sql 13:47:15 policy-db-migrator | -------------- 
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 
policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, 
conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0570-toscadatatype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0630-toscanodetype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0660-toscaparameter.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0670-toscapolicies.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0690-toscapolicy.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
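Editor's note: the many `*types_*type` map tables created in the steps above all share one shape — a composite primary key built from the containing map's key plus the contained concept's key. A minimal SQLite sketch of that shape (illustrative only, not from the log; the `concpet` spelling is reproduced exactly as the migration scripts emit it):

```python
# Hypothetical stand-in for the recurring container-map table pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE toscapolicytypes_toscapolicytype (
  conceptContainerMapName    TEXT NOT NULL,
  concpetContainerMapVersion TEXT NOT NULL,  -- spelling as in the scripts
  conceptContainerName       TEXT NOT NULL,
  conceptContainerVersion    TEXT NOT NULL,
  name    TEXT,
  version TEXT,
  PRIMARY KEY (conceptContainerMapName, concpetContainerMapVersion,
               conceptContainerName, conceptContainerVersion)
)
""")
row = ("map", "1.0.0", "onap.policies.Monitoring", "1.0.0", None, None)
conn.execute(
    "INSERT INTO toscapolicytypes_toscapolicytype VALUES (?,?,?,?,?,?)", row)
# The composite key rejects a second mapping of the same concept:
try:
    conn.execute(
        "INSERT INTO toscapolicytypes_toscapolicytype VALUES (?,?,?,?,?,?)", row)
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)
```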
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0730-toscaproperty.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0770-toscarequirement.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0780-toscarequirements.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0820-toscatrigger.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0100-pdp.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
13:47:15 policy-db-migrator | JOIN pdpstatistics b
13:47:15 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
13:47:15 policy-db-migrator | SET a.id = b.id
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0210-sequence.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0220-sequence.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0120-toscatrigger.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0140-toscaparameter.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0150-toscaproperty.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0100-upgrade.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | select 'upgrade to 1100 completed' as msg
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | msg
13:47:15 policy-db-migrator | upgrade to 1100 completed
13:47:15 policy-db-migrator | 
13:47:15 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
13:47:15 policy-db-migrator | --------------
13:47:15 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
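Editor's note: the 0140-pk_pdpstatistics.sql step above backfills the new surrogate `ID` column with `ROW_NUMBER()` over the timestamp order before a primary key is placed on it. A minimal SQLite stand-in for that MySQL `UPDATE ... JOIN` (table trimmed to the columns involved; the `GROUP BY` dedup is omitted for brevity):

```python
# Hypothetical sketch of the ID backfill; not the migrator's actual code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pdpstatistics (
  name TEXT, version TEXT, timeStamp TEXT, id INTEGER
);
INSERT INTO pdpstatistics (name, version, timeStamp) VALUES
  ('pdp-a', '1.0', '2024-03-20 13:45:00'),
  ('pdp-b', '1.0', '2024-03-20 13:44:00'),
  ('pdp-a', '1.0', '2024-03-20 13:46:00');
""")
# Number the rows by timestamp, then copy each row number into id via rowid.
conn.execute("""
UPDATE pdpstatistics
SET id = (SELECT rn FROM
            (SELECT rowid AS rid,
                    ROW_NUMBER() OVER (ORDER BY timeStamp) AS rn
             FROM pdpstatistics)
          WHERE rid = pdpstatistics.rowid)
""")
rows = conn.execute(
    "SELECT name, id FROM pdpstatistics ORDER BY id").fetchall()
print(rows)  # ids 1..3 follow timestamp order, so pdp-b comes first
```

Once every row carries a unique id this way, the subsequent `ADD CONSTRAINT PK_PDPSTATISTICS` in the log can succeed.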
13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0120-audit_sequence.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | TRUNCATE TABLE sequence 13:47:15 
policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | DROP TABLE pdpstatistics 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | DROP TABLE statistics_sequence 13:47:15 policy-db-migrator | -------------- 13:47:15 policy-db-migrator | 13:47:15 policy-db-migrator | policyadmin: OK: upgrade (1300) 13:47:15 policy-db-migrator | name version 13:47:15 policy-db-migrator | policyadmin 1300 13:47:15 policy-db-migrator | ID script operation from_version to_version tag success atTime 13:47:15 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:35 13:47:15 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:35 13:47:15 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:35 13:47:15 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:35 13:47:15 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 6 
0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0307241344350800u 1 
2024-07-03 13:44:36 13:47:15 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:36 13:47:15 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0307241344350800u 1 
2024-07-03 13:44:37 13:47:15 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:37 13:47:15 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 54 0630-toscanodetype.sql 
upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:38 13:47:15 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 71 
0800-toscaservicetemplate.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:39 13:47:15 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 86 
0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:40 13:47:15 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0307241344350800u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 
0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0307241344350900u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:41 13:47:15 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:42 13:47:15 
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0307241344351000u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0307241344351100u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0307241344351200u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0307241344351200u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0307241344351200u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0307241344351200u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0307241344351300u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0307241344351300u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0307241344351300u 1 2024-07-03 13:44:42 13:47:15 policy-db-migrator | policyadmin: OK @ 1300 13:47:15 =================================== 13:47:15 ======== Logs from pap ======== 13:47:15 policy-pap | Waiting for mariadb port 3306... 13:47:15 policy-pap | mariadb (172.17.0.5:3306) open 13:47:15 policy-pap | Waiting for kafka port 9092... 13:47:15 policy-pap | kafka (172.17.0.6:9092) open 13:47:15 policy-pap | Waiting for api port 6969... 13:47:15 policy-pap | api (172.17.0.9:6969) open 13:47:15 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 13:47:15 policy-pap | PDP group configuration file: PapDb.json 13:47:15 policy-pap | 13:47:15 policy-pap | . 
____ _ __ _ _ 13:47:15 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 13:47:15 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 13:47:15 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 13:47:15 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 13:47:15 policy-pap | =========|_|==============|___/=/_/_/_/ 13:47:15 policy-pap | :: Spring Boot :: (v3.1.10) 13:47:15 policy-pap | 13:47:15 policy-pap | [2024-07-03T13:44:56.807+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 13:47:15 policy-pap | [2024-07-03T13:44:56.886+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 39 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 13:47:15 policy-pap | [2024-07-03T13:44:56.887+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 13:47:15 policy-pap | [2024-07-03T13:44:58.860+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 13:47:15 policy-pap | [2024-07-03T13:44:58.955+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 85 ms. Found 7 JPA repository interfaces. 13:47:15 policy-pap | [2024-07-03T13:44:59.417+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 13:47:15 policy-pap | [2024-07-03T13:44:59.417+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 13:47:15 policy-pap | [2024-07-03T13:45:00.054+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 13:47:15 policy-pap | [2024-07-03T13:45:00.064+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 13:47:15 policy-pap | [2024-07-03T13:45:00.066+00:00|INFO|StandardService|main] Starting service [Tomcat] 13:47:15 policy-pap | [2024-07-03T13:45:00.066+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 13:47:15 policy-pap | [2024-07-03T13:45:00.168+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 13:47:15 policy-pap | [2024-07-03T13:45:00.168+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3200 ms 13:47:15 policy-pap | [2024-07-03T13:45:00.600+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 13:47:15 policy-pap | [2024-07-03T13:45:00.657+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 13:47:15 policy-pap | [2024-07-03T13:45:01.041+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 13:47:15 policy-pap | [2024-07-03T13:45:01.147+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@30e9ca13 13:47:15 policy-pap | [2024-07-03T13:45:01.149+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
13:47:15 policy-pap | [2024-07-03T13:45:01.181+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 13:47:15 policy-pap | [2024-07-03T13:45:02.675+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 13:47:15 policy-pap | [2024-07-03T13:45:02.686+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 13:47:15 policy-pap | [2024-07-03T13:45:03.223+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 13:47:15 policy-pap | [2024-07-03T13:45:03.599+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 13:47:15 policy-pap | [2024-07-03T13:45:03.716+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 13:47:15 policy-pap | [2024-07-03T13:45:03.976+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:47:15 policy-pap | allow.auto.create.topics = true 13:47:15 policy-pap | auto.commit.interval.ms = 5000 13:47:15 policy-pap | auto.include.jmx.reporter = true 13:47:15 policy-pap | auto.offset.reset = latest 13:47:15 policy-pap | bootstrap.servers = [kafka:9092] 13:47:15 policy-pap | check.crcs = true 13:47:15 policy-pap | client.dns.lookup = use_all_dns_ips 13:47:15 policy-pap | client.id = consumer-5c96c918-54eb-401a-98be-aaba56deddd0-1 13:47:15 policy-pap | client.rack = 13:47:15 policy-pap | connections.max.idle.ms = 540000 13:47:15 policy-pap | default.api.timeout.ms = 60000 13:47:15 policy-pap | enable.auto.commit = true 13:47:15 policy-pap | exclude.internal.topics = true 13:47:15 policy-pap | fetch.max.bytes = 52428800 13:47:15 policy-pap | fetch.max.wait.ms = 500 13:47:15 policy-pap | fetch.min.bytes = 1 13:47:15 policy-pap | group.id = 5c96c918-54eb-401a-98be-aaba56deddd0 13:47:15 policy-pap | group.instance.id = null 13:47:15 policy-pap | heartbeat.interval.ms = 3000 13:47:15 policy-pap | interceptor.classes = [] 13:47:15 policy-pap | internal.leave.group.on.close = true 13:47:15 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:47:15 policy-pap | isolation.level = read_uncommitted 13:47:15 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | max.partition.fetch.bytes = 1048576 13:47:15 policy-pap | max.poll.interval.ms = 300000 13:47:15 policy-pap | max.poll.records = 500 13:47:15 policy-pap | metadata.max.age.ms = 300000 13:47:15 policy-pap | metric.reporters = [] 13:47:15 policy-pap | metrics.num.samples = 2 13:47:15 policy-pap | metrics.recording.level = INFO 13:47:15 policy-pap | metrics.sample.window.ms = 30000 13:47:15 
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:47:15 policy-pap | receive.buffer.bytes = 65536 13:47:15 policy-pap | reconnect.backoff.max.ms = 1000 13:47:15 policy-pap | reconnect.backoff.ms = 50 13:47:15 policy-pap | request.timeout.ms = 30000 13:47:15 policy-pap | retry.backoff.ms = 100 13:47:15 policy-pap | sasl.client.callback.handler.class = null 13:47:15 policy-pap | sasl.jaas.config = null 13:47:15 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:47:15 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:47:15 policy-pap | sasl.kerberos.service.name = null 13:47:15 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:47:15 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:47:15 policy-pap | sasl.login.callback.handler.class = null 13:47:15 policy-pap | sasl.login.class = null 13:47:15 policy-pap | sasl.login.connect.timeout.ms = null 13:47:15 policy-pap | sasl.login.read.timeout.ms = null 13:47:15 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:47:15 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:47:15 policy-pap | sasl.login.refresh.window.factor = 0.8 13:47:15 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:47:15 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.login.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.mechanism = GSSAPI 13:47:15 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:47:15 policy-pap | sasl.oauthbearer.expected.audience = null 13:47:15 policy-pap | sasl.oauthbearer.expected.issuer = null 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:47:15 policy-pap | 
sasl.oauthbearer.scope.claim.name = scope 13:47:15 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:47:15 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:47:15 policy-pap | security.protocol = PLAINTEXT 13:47:15 policy-pap | security.providers = null 13:47:15 policy-pap | send.buffer.bytes = 131072 13:47:15 policy-pap | session.timeout.ms = 45000 13:47:15 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:47:15 policy-pap | socket.connection.setup.timeout.ms = 10000 13:47:15 policy-pap | ssl.cipher.suites = null 13:47:15 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:47:15 policy-pap | ssl.endpoint.identification.algorithm = https 13:47:15 policy-pap | ssl.engine.factory.class = null 13:47:15 policy-pap | ssl.key.password = null 13:47:15 policy-pap | ssl.keymanager.algorithm = SunX509 13:47:15 policy-pap | ssl.keystore.certificate.chain = null 13:47:15 policy-pap | ssl.keystore.key = null 13:47:15 policy-pap | ssl.keystore.location = null 13:47:15 policy-pap | ssl.keystore.password = null 13:47:15 policy-pap | ssl.keystore.type = JKS 13:47:15 policy-pap | ssl.protocol = TLSv1.3 13:47:15 policy-pap | ssl.provider = null 13:47:15 policy-pap | ssl.secure.random.implementation = null 13:47:15 policy-pap | ssl.trustmanager.algorithm = PKIX 13:47:15 policy-pap | ssl.truststore.certificates = null 13:47:15 policy-pap | ssl.truststore.location = null 13:47:15 policy-pap | ssl.truststore.password = null 13:47:15 policy-pap | ssl.truststore.type = JKS 13:47:15 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | 13:47:15 policy-pap | [2024-07-03T13:45:04.139+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:47:15 policy-pap | [2024-07-03T13:45:04.139+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:47:15 policy-pap | [2024-07-03T13:45:04.139+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014304137 13:47:15 policy-pap | 
[2024-07-03T13:45:04.141+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-1, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Subscribed to topic(s): policy-pdp-pap 13:47:15 policy-pap | [2024-07-03T13:45:04.142+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:47:15 policy-pap | allow.auto.create.topics = true 13:47:15 policy-pap | auto.commit.interval.ms = 5000 13:47:15 policy-pap | auto.include.jmx.reporter = true 13:47:15 policy-pap | auto.offset.reset = latest 13:47:15 policy-pap | bootstrap.servers = [kafka:9092] 13:47:15 policy-pap | check.crcs = true 13:47:15 policy-pap | client.dns.lookup = use_all_dns_ips 13:47:15 policy-pap | client.id = consumer-policy-pap-2 13:47:15 policy-pap | client.rack = 13:47:15 policy-pap | connections.max.idle.ms = 540000 13:47:15 policy-pap | default.api.timeout.ms = 60000 13:47:15 policy-pap | enable.auto.commit = true 13:47:15 policy-pap | exclude.internal.topics = true 13:47:15 policy-pap | fetch.max.bytes = 52428800 13:47:15 policy-pap | fetch.max.wait.ms = 500 13:47:15 policy-pap | fetch.min.bytes = 1 13:47:15 policy-pap | group.id = policy-pap 13:47:15 policy-pap | group.instance.id = null 13:47:15 policy-pap | heartbeat.interval.ms = 3000 13:47:15 policy-pap | interceptor.classes = [] 13:47:15 policy-pap | internal.leave.group.on.close = true 13:47:15 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:47:15 policy-pap | isolation.level = read_uncommitted 13:47:15 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | max.partition.fetch.bytes = 1048576 13:47:15 policy-pap | max.poll.interval.ms = 300000 13:47:15 policy-pap | max.poll.records = 500 13:47:15 policy-pap | metadata.max.age.ms = 300000 13:47:15 policy-pap | metric.reporters = [] 13:47:15 policy-pap | metrics.num.samples = 2 13:47:15 policy-pap | metrics.recording.level = INFO 13:47:15 policy-pap | metrics.sample.window.ms = 
30000 13:47:15 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:47:15 policy-pap | receive.buffer.bytes = 65536 13:47:15 policy-pap | reconnect.backoff.max.ms = 1000 13:47:15 policy-pap | reconnect.backoff.ms = 50 13:47:15 policy-pap | request.timeout.ms = 30000 13:47:15 policy-pap | retry.backoff.ms = 100 13:47:15 policy-pap | sasl.client.callback.handler.class = null 13:47:15 policy-pap | sasl.jaas.config = null 13:47:15 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:47:15 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:47:15 policy-pap | sasl.kerberos.service.name = null 13:47:15 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:47:15 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:47:15 policy-pap | sasl.login.callback.handler.class = null 13:47:15 policy-pap | sasl.login.class = null 13:47:15 policy-pap | sasl.login.connect.timeout.ms = null 13:47:15 policy-pap | sasl.login.read.timeout.ms = null 13:47:15 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:47:15 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:47:15 policy-pap | sasl.login.refresh.window.factor = 0.8 13:47:15 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:47:15 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.login.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.mechanism = GSSAPI 13:47:15 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:47:15 policy-pap | sasl.oauthbearer.expected.audience = null 13:47:15 policy-pap | sasl.oauthbearer.expected.issuer = null 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:47:15 
policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:47:15 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:47:15 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:47:15 policy-pap | security.protocol = PLAINTEXT 13:47:15 policy-pap | security.providers = null 13:47:15 policy-pap | send.buffer.bytes = 131072 13:47:15 policy-pap | session.timeout.ms = 45000 13:47:15 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:47:15 policy-pap | socket.connection.setup.timeout.ms = 10000 13:47:15 policy-pap | ssl.cipher.suites = null 13:47:15 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:47:15 policy-pap | ssl.endpoint.identification.algorithm = https 13:47:15 policy-pap | ssl.engine.factory.class = null 13:47:15 policy-pap | ssl.key.password = null 13:47:15 policy-pap | ssl.keymanager.algorithm = SunX509 13:47:15 policy-pap | ssl.keystore.certificate.chain = null 13:47:15 policy-pap | ssl.keystore.key = null 13:47:15 policy-pap | ssl.keystore.location = null 13:47:15 policy-pap | ssl.keystore.password = null 13:47:15 policy-pap | ssl.keystore.type = JKS 13:47:15 policy-pap | ssl.protocol = TLSv1.3 13:47:15 policy-pap | ssl.provider = null 13:47:15 policy-pap | ssl.secure.random.implementation = null 13:47:15 policy-pap | ssl.trustmanager.algorithm = PKIX 13:47:15 policy-pap | ssl.truststore.certificates = null 13:47:15 policy-pap | ssl.truststore.location = null 13:47:15 policy-pap | ssl.truststore.password = null 13:47:15 policy-pap | ssl.truststore.type = JKS 13:47:15 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | 13:47:15 policy-pap | [2024-07-03T13:45:04.147+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:47:15 policy-pap | [2024-07-03T13:45:04.147+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:47:15 policy-pap | [2024-07-03T13:45:04.147+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014304147 13:47:15 policy-pap | 
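The ConsumerConfig dumps in this log repeat the same values and differ only in `client.id` and `group.id`. As a condensed sketch (plain Python dicts; `BASE_CONSUMER_CONFIG` and the `consumer_config` helper are illustrative names, not part of the PAP code):

```python
# Values shared by every ConsumerConfig dump in this log (non-exhaustive:
# only the settings most relevant to the CSIT run are kept).
BASE_CONSUMER_CONFIG = {
    "bootstrap.servers": "kafka:9092",
    "auto.offset.reset": "latest",          # new groups start at the log end
    "enable.auto.commit": True,
    "auto.commit.interval.ms": 5000,
    "session.timeout.ms": 45000,
    "heartbeat.interval.ms": 3000,
    "security.protocol": "PLAINTEXT",       # no TLS/SASL in this test setup
    "key.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
    "value.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
}

def consumer_config(client_id: str, group_id: str) -> dict:
    """Per-consumer view: only client.id and group.id vary between the dumps."""
    cfg = dict(BASE_CONSUMER_CONFIG)
    cfg["client.id"] = client_id
    cfg["group.id"] = group_id
    return cfg

# The consumer logged above:
cfg = consumer_config("consumer-policy-pap-2", "policy-pap")
```

Reading later dumps then reduces to spotting the two varying keys, e.g. `consumer-5c96c918-...-3` in group `5c96c918-54eb-401a-98be-aaba56deddd0`.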
[2024-07-03T13:45:04.148+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 13:47:15 policy-pap | [2024-07-03T13:45:04.496+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null), PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null), PdpSubGroup(pdpType=apex, 
supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from PapDb.json 13:47:15 policy-pap | [2024-07-03T13:45:04.633+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 13:47:15 policy-pap | [2024-07-03T13:45:04.852+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2aeb7c4c, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@67f266bd, org.springframework.security.web.context.SecurityContextHolderFilter@7836c79, org.springframework.security.web.header.HeaderWriterFilter@26b3fe8, org.springframework.security.web.authentication.logout.LogoutFilter@d02c00, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@8c18bde, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@28269c65, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@18b58c77, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@31c0c7e5, org.springframework.security.web.access.ExceptionTranslationFilter@470b5213, org.springframework.security.web.access.intercept.AuthorizationFilter@2befb16f] 13:47:15 policy-pap | [2024-07-03T13:45:05.589+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 13:47:15 policy-pap | [2024-07-03T13:45:05.687+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 13:47:15 policy-pap | [2024-07-03T13:45:05.705+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 13:47:15 policy-pap | 
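The single-line `PapDatabaseInitializer` record above is dense; trimmed to its structure, the `defaultGroup` created from `PapDb.json` looks roughly like this (field names taken from the log, policy-type lists elided; a sketch, not the actual Java model):

```python
# Trimmed shape of the initial PdpGroup created from PapDb.json (see log above).
default_group = {
    "name": "defaultGroup",
    "pdpGroupState": "ACTIVE",
    "pdpSubgroups": [
        {"pdpType": "xacml",  "currentInstanceCount": 0, "desiredInstanceCount": 1},
        {"pdpType": "drools", "currentInstanceCount": 0, "desiredInstanceCount": 1},
        {"pdpType": "apex",   "currentInstanceCount": 0, "desiredInstanceCount": 1},
    ],
}

# Each subgroup starts empty; the run expects one instance per PDP type to register.
pdp_types = [sg["pdpType"] for sg in default_group["pdpSubgroups"]]
```

The apex subgroup is the one this CSIT job exercises; the xacml and drools subgroups are created with the same desired count of 1 but no instances.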
[2024-07-03T13:45:05.725+00:00|INFO|ServiceManager|main] Policy PAP starting 13:47:15 policy-pap | [2024-07-03T13:45:05.725+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 13:47:15 policy-pap | [2024-07-03T13:45:05.725+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 13:47:15 policy-pap | [2024-07-03T13:45:05.726+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 13:47:15 policy-pap | [2024-07-03T13:45:05.726+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 13:47:15 policy-pap | [2024-07-03T13:45:05.726+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 13:47:15 policy-pap | [2024-07-03T13:45:05.726+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 13:47:15 policy-pap | [2024-07-03T13:45:05.728+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5c96c918-54eb-401a-98be-aaba56deddd0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@236a3e4 13:47:15 policy-pap | [2024-07-03T13:45:05.740+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5c96c918-54eb-401a-98be-aaba56deddd0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:47:15 policy-pap | [2024-07-03T13:45:05.741+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:47:15 policy-pap | allow.auto.create.topics = true 13:47:15 policy-pap | auto.commit.interval.ms = 5000 13:47:15 policy-pap | auto.include.jmx.reporter = true 13:47:15 policy-pap | auto.offset.reset = latest 13:47:15 policy-pap | bootstrap.servers = [kafka:9092] 13:47:15 policy-pap | check.crcs = true 13:47:15 policy-pap | client.dns.lookup = use_all_dns_ips 13:47:15 policy-pap | client.id = consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3 13:47:15 policy-pap | client.rack = 13:47:15 policy-pap | connections.max.idle.ms = 540000 13:47:15 policy-pap | default.api.timeout.ms = 60000 13:47:15 policy-pap | enable.auto.commit = true 13:47:15 policy-pap | exclude.internal.topics = true 13:47:15 policy-pap | fetch.max.bytes = 52428800 13:47:15 policy-pap | fetch.max.wait.ms = 500 13:47:15 policy-pap | fetch.min.bytes = 1 13:47:15 policy-pap | group.id = 5c96c918-54eb-401a-98be-aaba56deddd0 13:47:15 policy-pap | group.instance.id = null 13:47:15 policy-pap | heartbeat.interval.ms = 3000 13:47:15 policy-pap | interceptor.classes = [] 13:47:15 policy-pap | internal.leave.group.on.close = true 13:47:15 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:47:15 policy-pap | isolation.level = read_uncommitted 13:47:15 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | max.partition.fetch.bytes = 1048576 13:47:15 policy-pap | max.poll.interval.ms = 300000 13:47:15 policy-pap | max.poll.records = 500 13:47:15 policy-pap | metadata.max.age.ms = 300000 13:47:15 policy-pap | metric.reporters = [] 13:47:15 policy-pap | metrics.num.samples = 2 
13:47:15 policy-pap | metrics.recording.level = INFO 13:47:15 policy-pap | metrics.sample.window.ms = 30000 13:47:15 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:47:15 policy-pap | receive.buffer.bytes = 65536 13:47:15 policy-pap | reconnect.backoff.max.ms = 1000 13:47:15 policy-pap | reconnect.backoff.ms = 50 13:47:15 policy-pap | request.timeout.ms = 30000 13:47:15 policy-pap | retry.backoff.ms = 100 13:47:15 policy-pap | sasl.client.callback.handler.class = null 13:47:15 policy-pap | sasl.jaas.config = null 13:47:15 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:47:15 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:47:15 policy-pap | sasl.kerberos.service.name = null 13:47:15 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:47:15 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:47:15 policy-pap | sasl.login.callback.handler.class = null 13:47:15 policy-pap | sasl.login.class = null 13:47:15 policy-pap | sasl.login.connect.timeout.ms = null 13:47:15 policy-pap | sasl.login.read.timeout.ms = null 13:47:15 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:47:15 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:47:15 policy-pap | sasl.login.refresh.window.factor = 0.8 13:47:15 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:47:15 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.login.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.mechanism = GSSAPI 13:47:15 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:47:15 policy-pap | sasl.oauthbearer.expected.audience = null 13:47:15 policy-pap | sasl.oauthbearer.expected.issuer = null 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:47:15 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:47:15 policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:47:15 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:47:15 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:47:15 policy-pap | security.protocol = PLAINTEXT 13:47:15 policy-pap | security.providers = null 13:47:15 policy-pap | send.buffer.bytes = 131072 13:47:15 policy-pap | session.timeout.ms = 45000 13:47:15 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:47:15 policy-pap | socket.connection.setup.timeout.ms = 10000 13:47:15 policy-pap | ssl.cipher.suites = null 13:47:15 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:47:15 policy-pap | ssl.endpoint.identification.algorithm = https 13:47:15 policy-pap | ssl.engine.factory.class = null 13:47:15 policy-pap | ssl.key.password = null 13:47:15 policy-pap | ssl.keymanager.algorithm = SunX509 13:47:15 policy-pap | ssl.keystore.certificate.chain = null 13:47:15 policy-pap | ssl.keystore.key = null 13:47:15 policy-pap | ssl.keystore.location = null 13:47:15 policy-pap | ssl.keystore.password = null 13:47:15 policy-pap | ssl.keystore.type = JKS 13:47:15 policy-pap | ssl.protocol = TLSv1.3 13:47:15 policy-pap | ssl.provider = null 13:47:15 policy-pap | ssl.secure.random.implementation = null 13:47:15 policy-pap | ssl.trustmanager.algorithm = PKIX 13:47:15 policy-pap | ssl.truststore.certificates = null 13:47:15 policy-pap | ssl.truststore.location = null 13:47:15 policy-pap | ssl.truststore.password = null 13:47:15 policy-pap | ssl.truststore.type = JKS 13:47:15 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | 13:47:15 policy-pap | [2024-07-03T13:45:05.747+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:47:15 policy-pap | [2024-07-03T13:45:05.747+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:47:15 
policy-pap | [2024-07-03T13:45:05.747+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014305747 13:47:15 policy-pap | [2024-07-03T13:45:05.747+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Subscribed to topic(s): policy-pdp-pap 13:47:15 policy-pap | [2024-07-03T13:45:05.747+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 13:47:15 policy-pap | [2024-07-03T13:45:05.747+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=23df9267-a7d1-4c8b-b74e-6811808673c9, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2cc9a948 13:47:15 policy-pap | [2024-07-03T13:45:05.747+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=23df9267-a7d1-4c8b-b74e-6811808673c9, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:47:15 policy-pap | [2024-07-03T13:45:05.748+00:00|INFO|ConsumerConfig|main] ConsumerConfig 
values: 13:47:15 policy-pap | allow.auto.create.topics = true 13:47:15 policy-pap | auto.commit.interval.ms = 5000 13:47:15 policy-pap | auto.include.jmx.reporter = true 13:47:15 policy-pap | auto.offset.reset = latest 13:47:15 policy-pap | bootstrap.servers = [kafka:9092] 13:47:15 policy-pap | check.crcs = true 13:47:15 policy-pap | client.dns.lookup = use_all_dns_ips 13:47:15 policy-pap | client.id = consumer-policy-pap-4 13:47:15 policy-pap | client.rack = 13:47:15 policy-pap | connections.max.idle.ms = 540000 13:47:15 policy-pap | default.api.timeout.ms = 60000 13:47:15 policy-pap | enable.auto.commit = true 13:47:15 policy-pap | exclude.internal.topics = true 13:47:15 policy-pap | fetch.max.bytes = 52428800 13:47:15 policy-pap | fetch.max.wait.ms = 500 13:47:15 policy-pap | fetch.min.bytes = 1 13:47:15 policy-pap | group.id = policy-pap 13:47:15 policy-pap | group.instance.id = null 13:47:15 policy-pap | heartbeat.interval.ms = 3000 13:47:15 policy-pap | interceptor.classes = [] 13:47:15 policy-pap | internal.leave.group.on.close = true 13:47:15 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:47:15 policy-pap | isolation.level = read_uncommitted 13:47:15 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | max.partition.fetch.bytes = 1048576 13:47:15 policy-pap | max.poll.interval.ms = 300000 13:47:15 policy-pap | max.poll.records = 500 13:47:15 policy-pap | metadata.max.age.ms = 300000 13:47:15 policy-pap | metric.reporters = [] 13:47:15 policy-pap | metrics.num.samples = 2 13:47:15 policy-pap | metrics.recording.level = INFO 13:47:15 policy-pap | metrics.sample.window.ms = 30000 13:47:15 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:47:15 policy-pap | receive.buffer.bytes = 65536 13:47:15 policy-pap | reconnect.backoff.max.ms = 1000 13:47:15 
policy-pap | reconnect.backoff.ms = 50 13:47:15 policy-pap | request.timeout.ms = 30000 13:47:15 policy-pap | retry.backoff.ms = 100 13:47:15 policy-pap | sasl.client.callback.handler.class = null 13:47:15 policy-pap | sasl.jaas.config = null 13:47:15 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:47:15 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:47:15 policy-pap | sasl.kerberos.service.name = null 13:47:15 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:47:15 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:47:15 policy-pap | sasl.login.callback.handler.class = null 13:47:15 policy-pap | sasl.login.class = null 13:47:15 policy-pap | sasl.login.connect.timeout.ms = null 13:47:15 policy-pap | sasl.login.read.timeout.ms = null 13:47:15 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:47:15 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:47:15 policy-pap | sasl.login.refresh.window.factor = 0.8 13:47:15 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:47:15 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.login.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.mechanism = GSSAPI 13:47:15 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:47:15 policy-pap | sasl.oauthbearer.expected.audience = null 13:47:15 policy-pap | sasl.oauthbearer.expected.issuer = null 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:47:15 policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:47:15 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:47:15 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:47:15 policy-pap | security.protocol = PLAINTEXT 13:47:15 policy-pap | security.providers = null 13:47:15 policy-pap | 
send.buffer.bytes = 131072 13:47:15 policy-pap | session.timeout.ms = 45000 13:47:15 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:47:15 policy-pap | socket.connection.setup.timeout.ms = 10000 13:47:15 policy-pap | ssl.cipher.suites = null 13:47:15 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:47:15 policy-pap | ssl.endpoint.identification.algorithm = https 13:47:15 policy-pap | ssl.engine.factory.class = null 13:47:15 policy-pap | ssl.key.password = null 13:47:15 policy-pap | ssl.keymanager.algorithm = SunX509 13:47:15 policy-pap | ssl.keystore.certificate.chain = null 13:47:15 policy-pap | ssl.keystore.key = null 13:47:15 policy-pap | ssl.keystore.location = null 13:47:15 policy-pap | ssl.keystore.password = null 13:47:15 policy-pap | ssl.keystore.type = JKS 13:47:15 policy-pap | ssl.protocol = TLSv1.3 13:47:15 policy-pap | ssl.provider = null 13:47:15 policy-pap | ssl.secure.random.implementation = null 13:47:15 policy-pap | ssl.trustmanager.algorithm = PKIX 13:47:15 policy-pap | ssl.truststore.certificates = null 13:47:15 policy-pap | ssl.truststore.location = null 13:47:15 policy-pap | ssl.truststore.password = null 13:47:15 policy-pap | ssl.truststore.type = JKS 13:47:15 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:47:15 policy-pap | 13:47:15 policy-pap | [2024-07-03T13:45:05.752+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:47:15 policy-pap | [2024-07-03T13:45:05.752+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:47:15 policy-pap | [2024-07-03T13:45:05.752+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014305752 13:47:15 policy-pap | [2024-07-03T13:45:05.752+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 13:47:15 policy-pap | [2024-07-03T13:45:05.753+00:00|INFO|ServiceManager|main] Policy PAP starting topics 13:47:15 policy-pap | 
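Note that the heartbeat source in the log is configured with `topic=policy-heartbeat` but `effectiveTopic=policy-pdp-pap`, so both topic sources end up subscribed to the same Kafka topic. A sketch of that aliasing (the `TOPIC_ALIASES` mapping is illustrative, not PAP code):

```python
# Both sources in the log resolve to the same Kafka topic:
#   topic=policy-pdp-pap   -> effectiveTopic=policy-pdp-pap
#   topic=policy-heartbeat -> effectiveTopic=policy-pdp-pap
TOPIC_ALIASES = {"policy-heartbeat": "policy-pdp-pap"}

def effective_topic(topic: str) -> str:
    """Resolve a logical topic name to the Kafka topic actually subscribed to."""
    return TOPIC_ALIASES.get(topic, topic)
```

This matches the `Subscribed to topic(s): policy-pdp-pap` lines: all four consumers, including the heartbeat listener, read `policy-pdp-pap`.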
[2024-07-03T13:45:05.753+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=23df9267-a7d1-4c8b-b74e-6811808673c9, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:47:15 policy-pap | [2024-07-03T13:45:05.753+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5c96c918-54eb-401a-98be-aaba56deddd0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:47:15 policy-pap | [2024-07-03T13:45:05.753+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6eced121-f71e-4203-8d80-0180c59a9055, alive=false, publisher=null]]: starting 13:47:15 policy-pap | [2024-07-03T13:45:05.768+00:00|INFO|ProducerConfig|main] ProducerConfig values: 13:47:15 policy-pap | acks = -1 13:47:15 policy-pap | auto.include.jmx.reporter = true 13:47:15 policy-pap | batch.size = 16384 13:47:15 policy-pap | bootstrap.servers = [kafka:9092] 13:47:15 
policy-pap | buffer.memory = 33554432 13:47:15 policy-pap | client.dns.lookup = use_all_dns_ips 13:47:15 policy-pap | client.id = producer-1 13:47:15 policy-pap | compression.type = none 13:47:15 policy-pap | connections.max.idle.ms = 540000 13:47:15 policy-pap | delivery.timeout.ms = 120000 13:47:15 policy-pap | enable.idempotence = true 13:47:15 policy-pap | interceptor.classes = [] 13:47:15 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:47:15 policy-pap | linger.ms = 0 13:47:15 policy-pap | max.block.ms = 60000 13:47:15 policy-pap | max.in.flight.requests.per.connection = 5 13:47:15 policy-pap | max.request.size = 1048576 13:47:15 policy-pap | metadata.max.age.ms = 300000 13:47:15 policy-pap | metadata.max.idle.ms = 300000 13:47:15 policy-pap | metric.reporters = [] 13:47:15 policy-pap | metrics.num.samples = 2 13:47:15 policy-pap | metrics.recording.level = INFO 13:47:15 policy-pap | metrics.sample.window.ms = 30000 13:47:15 policy-pap | partitioner.adaptive.partitioning.enable = true 13:47:15 policy-pap | partitioner.availability.timeout.ms = 0 13:47:15 policy-pap | partitioner.class = null 13:47:15 policy-pap | partitioner.ignore.keys = false 13:47:15 policy-pap | receive.buffer.bytes = 32768 13:47:15 policy-pap | reconnect.backoff.max.ms = 1000 13:47:15 policy-pap | reconnect.backoff.ms = 50 13:47:15 policy-pap | request.timeout.ms = 30000 13:47:15 policy-pap | retries = 2147483647 13:47:15 policy-pap | retry.backoff.ms = 100 13:47:15 policy-pap | sasl.client.callback.handler.class = null 13:47:15 policy-pap | sasl.jaas.config = null 13:47:15 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:47:15 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:47:15 policy-pap | sasl.kerberos.service.name = null 13:47:15 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:47:15 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:47:15 policy-pap | sasl.login.callback.handler.class = null 
13:47:15 policy-pap | sasl.login.class = null 13:47:15 policy-pap | sasl.login.connect.timeout.ms = null 13:47:15 policy-pap | sasl.login.read.timeout.ms = null 13:47:15 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:47:15 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:47:15 policy-pap | sasl.login.refresh.window.factor = 0.8 13:47:15 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:47:15 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.login.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.mechanism = GSSAPI 13:47:15 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:47:15 policy-pap | sasl.oauthbearer.expected.audience = null 13:47:15 policy-pap | sasl.oauthbearer.expected.issuer = null 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:47:15 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:47:15 policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:47:15 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:47:15 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:47:15 policy-pap | security.protocol = PLAINTEXT 13:47:15 policy-pap | security.providers = null 13:47:15 policy-pap | send.buffer.bytes = 131072 13:47:15 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:47:15 policy-pap | socket.connection.setup.timeout.ms = 10000 13:47:15 policy-pap | ssl.cipher.suites = null 13:47:15 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:47:15 policy-pap | ssl.endpoint.identification.algorithm = https 13:47:15 policy-pap | ssl.engine.factory.class = null 13:47:15 policy-pap | ssl.key.password = null 13:47:15 policy-pap | ssl.keymanager.algorithm = SunX509 13:47:15 policy-pap | ssl.keystore.certificate.chain = null 13:47:15 policy-pap | ssl.keystore.key = null 13:47:15 policy-pap | 
ssl.keystore.location = null
13:47:15 policy-pap | ssl.keystore.password = null
13:47:15 policy-pap | ssl.keystore.type = JKS
13:47:15 policy-pap | ssl.protocol = TLSv1.3
13:47:15 policy-pap | ssl.provider = null
13:47:15 policy-pap | ssl.secure.random.implementation = null
13:47:15 policy-pap | ssl.trustmanager.algorithm = PKIX
13:47:15 policy-pap | ssl.truststore.certificates = null
13:47:15 policy-pap | ssl.truststore.location = null
13:47:15 policy-pap | ssl.truststore.password = null
13:47:15 policy-pap | ssl.truststore.type = JKS
13:47:15 policy-pap | transaction.timeout.ms = 60000
13:47:15 policy-pap | transactional.id = null
13:47:15 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:47:15 policy-pap | 
13:47:15 policy-pap | [2024-07-03T13:45:05.778+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
13:47:15 policy-pap | [2024-07-03T13:45:05.794+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
13:47:15 policy-pap | [2024-07-03T13:45:05.794+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
13:47:15 policy-pap | [2024-07-03T13:45:05.795+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014305794
13:47:15 policy-pap | [2024-07-03T13:45:05.795+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6eced121-f71e-4203-8d80-0180c59a9055, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
13:47:15 policy-pap | [2024-07-03T13:45:05.795+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=85ef31d0-ee20-416d-8320-5666b2f52b10, alive=false, publisher=null]]: starting
13:47:15 policy-pap | [2024-07-03T13:45:05.795+00:00|INFO|ProducerConfig|main] ProducerConfig values:
13:47:15 policy-pap | 	acks = -1
13:47:15 policy-pap | 	auto.include.jmx.reporter = true
13:47:15 policy-pap | 	batch.size = 16384
13:47:15 policy-pap | 	bootstrap.servers = [kafka:9092]
13:47:15 policy-pap | 	buffer.memory = 33554432
13:47:15 policy-pap | 	client.dns.lookup = use_all_dns_ips
13:47:15 policy-pap | 	client.id = producer-2
13:47:15 policy-pap | 	compression.type = none
13:47:15 policy-pap | 	connections.max.idle.ms = 540000
13:47:15 policy-pap | 	delivery.timeout.ms = 120000
13:47:15 policy-pap | 	enable.idempotence = true
13:47:15 policy-pap | 	interceptor.classes = []
13:47:15 policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:47:15 policy-pap | 	linger.ms = 0
13:47:15 policy-pap | 	max.block.ms = 60000
13:47:15 policy-pap | 	max.in.flight.requests.per.connection = 5
13:47:15 policy-pap | 	max.request.size = 1048576
13:47:15 policy-pap | 	metadata.max.age.ms = 300000
13:47:15 policy-pap | 	metadata.max.idle.ms = 300000
13:47:15 policy-pap | 	metric.reporters = []
13:47:15 policy-pap | 	metrics.num.samples = 2
13:47:15 policy-pap | 	metrics.recording.level = INFO
13:47:15 policy-pap | 	metrics.sample.window.ms = 30000
13:47:15 policy-pap | 	partitioner.adaptive.partitioning.enable = true
13:47:15 policy-pap | 	partitioner.availability.timeout.ms = 0
13:47:15 policy-pap | 	partitioner.class = null
13:47:15 policy-pap | 	partitioner.ignore.keys = false
13:47:15 policy-pap | 	receive.buffer.bytes = 32768
13:47:15 policy-pap | 	reconnect.backoff.max.ms = 1000
13:47:15 policy-pap | 	reconnect.backoff.ms = 50
13:47:15 policy-pap | 	request.timeout.ms = 30000
13:47:15 policy-pap | 	retries = 2147483647
13:47:15 policy-pap | 	retry.backoff.ms = 100
13:47:15 policy-pap | 	sasl.client.callback.handler.class = null
13:47:15 policy-pap | 	sasl.jaas.config = null
13:47:15 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:47:15 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
13:47:15 policy-pap | 	sasl.kerberos.service.name = null
13:47:15 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
13:47:15 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
13:47:15 policy-pap | 	sasl.login.callback.handler.class = null
13:47:15 policy-pap | 	sasl.login.class = null
13:47:15 policy-pap | 	sasl.login.connect.timeout.ms = null
13:47:15 policy-pap | 	sasl.login.read.timeout.ms = null
13:47:15 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
13:47:15 policy-pap | 	sasl.login.refresh.min.period.seconds = 60
13:47:15 policy-pap | 	sasl.login.refresh.window.factor = 0.8
13:47:15 policy-pap | 	sasl.login.refresh.window.jitter = 0.05
13:47:15 policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
13:47:15 policy-pap | 	sasl.login.retry.backoff.ms = 100
13:47:15 policy-pap | 	sasl.mechanism = GSSAPI
13:47:15 policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
13:47:15 policy-pap | 	sasl.oauthbearer.expected.audience = null
13:47:15 policy-pap | 	sasl.oauthbearer.expected.issuer = null
13:47:15 policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:47:15 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:47:15 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:47:15 policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
13:47:15 policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
13:47:15 policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
13:47:15 policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
13:47:15 policy-pap | 	security.protocol = PLAINTEXT
13:47:15 policy-pap | 	security.providers = null
13:47:15 policy-pap | 	send.buffer.bytes = 131072
13:47:15 policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
13:47:15 policy-pap | 	socket.connection.setup.timeout.ms = 10000
13:47:15 policy-pap | 	ssl.cipher.suites = null
13:47:15 policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:47:15 policy-pap | 	ssl.endpoint.identification.algorithm = https
13:47:15 policy-pap | 	ssl.engine.factory.class = null
13:47:15 policy-pap | 	ssl.key.password = null
13:47:15 policy-pap | 	ssl.keymanager.algorithm = SunX509
13:47:15 policy-pap | 	ssl.keystore.certificate.chain = null
13:47:15 policy-pap | 	ssl.keystore.key = null
13:47:15 policy-pap | 	ssl.keystore.location = null
13:47:15 policy-pap | 	ssl.keystore.password = null
13:47:15 policy-pap | 	ssl.keystore.type = JKS
13:47:15 policy-pap | 	ssl.protocol = TLSv1.3
13:47:15 policy-pap | 	ssl.provider = null
13:47:15 policy-pap | 	ssl.secure.random.implementation = null
13:47:15 policy-pap | 	ssl.trustmanager.algorithm = PKIX
13:47:15 policy-pap | 	ssl.truststore.certificates = null
13:47:15 policy-pap | 	ssl.truststore.location = null
13:47:15 policy-pap | 	ssl.truststore.password = null
13:47:15 policy-pap | 	ssl.truststore.type = JKS
13:47:15 policy-pap | 	transaction.timeout.ms = 60000
13:47:15 policy-pap | 	transactional.id = null
13:47:15 policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:47:15 policy-pap | 
13:47:15 policy-pap | [2024-07-03T13:45:05.796+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
13:47:15 policy-pap | [2024-07-03T13:45:05.799+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
13:47:15 policy-pap | [2024-07-03T13:45:05.799+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
13:47:15 policy-pap | [2024-07-03T13:45:05.799+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720014305799
13:47:15 policy-pap | [2024-07-03T13:45:05.799+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=85ef31d0-ee20-416d-8320-5666b2f52b10, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
13:47:15 policy-pap | [2024-07-03T13:45:05.799+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
13:47:15 policy-pap | [2024-07-03T13:45:05.799+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
13:47:15 policy-pap | [2024-07-03T13:45:05.800+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
13:47:15 policy-pap | [2024-07-03T13:45:05.801+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
13:47:15 policy-pap | [2024-07-03T13:45:05.804+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
13:47:15 policy-pap | [2024-07-03T13:45:05.805+00:00|INFO|TimerManager|Thread-9] timer manager update started
13:47:15 policy-pap | [2024-07-03T13:45:05.805+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
13:47:15 policy-pap | [2024-07-03T13:45:05.806+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
13:47:15 policy-pap | [2024-07-03T13:45:05.806+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
13:47:15 policy-pap | [2024-07-03T13:45:05.806+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
13:47:15 policy-pap | [2024-07-03T13:45:05.810+00:00|INFO|ServiceManager|main] Policy PAP started
13:47:15 policy-pap | [2024-07-03T13:45:05.824+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.725 seconds (process running for 10.358)
13:47:15 policy-pap | [2024-07-03T13:45:06.190+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
13:47:15 policy-pap | [2024-07-03T13:45:06.190+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: ZzyuOxDrRguYnYKgclDwYw
13:47:15 policy-pap | [2024-07-03T13:45:06.190+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: ZzyuOxDrRguYnYKgclDwYw
13:47:15 policy-pap | [2024-07-03T13:45:06.190+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: ZzyuOxDrRguYnYKgclDwYw
13:47:15 policy-pap | [2024-07-03T13:45:06.294+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
13:47:15 policy-pap | [2024-07-03T13:45:06.313+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
13:47:15 policy-pap | [2024-07-03T13:45:06.318+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
13:47:15 policy-pap | [2024-07-03T13:45:06.346+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.346+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Cluster ID: ZzyuOxDrRguYnYKgclDwYw
13:47:15 policy-pap | [2024-07-03T13:45:06.406+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.469+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.527+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.578+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.633+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.687+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.738+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.793+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.843+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.898+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:06.948+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:07.003+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:07.054+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:07.110+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:07.159+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:07.223+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
13:47:15 policy-pap | [2024-07-03T13:45:07.270+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
13:47:15 policy-pap | [2024-07-03T13:45:07.277+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
13:47:15 policy-pap | [2024-07-03T13:45:07.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-22adce61-2bf5-4ed8-8fb3-c45a45f44abb
13:47:15 policy-pap | [2024-07-03T13:45:07.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
13:47:15 policy-pap | [2024-07-03T13:45:07.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
13:47:15 policy-pap | [2024-07-03T13:45:07.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
13:47:15 policy-pap | [2024-07-03T13:45:07.329+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] (Re-)joining group
13:47:15 policy-pap | [2024-07-03T13:45:07.332+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Request joining group due to: need to re-join with the given member-id: consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3-742e9a4e-ecbb-4557-ba8f-8319bc1a4974
13:47:15 policy-pap | [2024-07-03T13:45:07.333+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
13:47:15 policy-pap | [2024-07-03T13:45:07.333+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] (Re-)joining group
13:47:15 policy-pap | [2024-07-03T13:45:10.323+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-22adce61-2bf5-4ed8-8fb3-c45a45f44abb', protocol='range'}
13:47:15 policy-pap | [2024-07-03T13:45:10.330+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-22adce61-2bf5-4ed8-8fb3-c45a45f44abb=Assignment(partitions=[policy-pdp-pap-0])}
13:47:15 policy-pap | [2024-07-03T13:45:10.343+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3-742e9a4e-ecbb-4557-ba8f-8319bc1a4974', protocol='range'}
13:47:15 policy-pap | [2024-07-03T13:45:10.344+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Finished assignment for group at generation 1: {consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3-742e9a4e-ecbb-4557-ba8f-8319bc1a4974=Assignment(partitions=[policy-pdp-pap-0])}
13:47:15 policy-pap | [2024-07-03T13:45:10.356+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3-742e9a4e-ecbb-4557-ba8f-8319bc1a4974', protocol='range'}
13:47:15 policy-pap | [2024-07-03T13:45:10.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-22adce61-2bf5-4ed8-8fb3-c45a45f44abb', protocol='range'}
13:47:15 policy-pap | [2024-07-03T13:45:10.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
13:47:15 policy-pap | [2024-07-03T13:45:10.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
13:47:15 policy-pap | [2024-07-03T13:45:10.361+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Adding newly assigned partitions: policy-pdp-pap-0
13:47:15 policy-pap | [2024-07-03T13:45:10.361+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
13:47:15 policy-pap | [2024-07-03T13:45:10.384+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Found no committed offset for partition policy-pdp-pap-0
13:47:15 policy-pap | [2024-07-03T13:45:10.384+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
13:47:15 policy-pap | [2024-07-03T13:45:10.399+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5c96c918-54eb-401a-98be-aaba56deddd0-3, groupId=5c96c918-54eb-401a-98be-aaba56deddd0] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
13:47:15 policy-pap | [2024-07-03T13:45:10.399+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
13:47:15 policy-pap | [2024-07-03T13:45:27.754+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
13:47:15 policy-pap | []
13:47:15 policy-pap | [2024-07-03T13:45:27.755+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c6b3afb6-5983-4d84-968c-c30ffc9c944c","timestampMs":1720014327716,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"}
13:47:15 policy-pap | [2024-07-03T13:45:27.755+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c6b3afb6-5983-4d84-968c-c30ffc9c944c","timestampMs":1720014327716,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"}
13:47:15 policy-pap | [2024-07-03T13:45:27.764+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
13:47:15 policy-pap | [2024-07-03T13:45:27.910+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting
13:47:15 policy-pap | [2024-07-03T13:45:27.910+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting listener
13:47:15 policy-pap | [2024-07-03T13:45:27.911+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting timer
13:47:15 policy-pap | [2024-07-03T13:45:27.911+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=744fd4d8-3fd6-4e55-83d0-ceced1eaca18, expireMs=1720014357911]
13:47:15 policy-pap | [2024-07-03T13:45:27.913+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting enqueue
13:47:15 policy-pap | [2024-07-03T13:45:27.913+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=744fd4d8-3fd6-4e55-83d0-ceced1eaca18, expireMs=1720014357911]
13:47:15 policy-pap | [2024-07-03T13:45:27.913+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate started
13:47:15 policy-pap | [2024-07-03T13:45:27.920+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","timestampMs":1720014327889,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:27.979+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","timestampMs":1720014327889,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:27.980+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
13:47:15 policy-pap | [2024-07-03T13:45:27.980+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","timestampMs":1720014327889,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:27.981+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
13:47:15 policy-pap | [2024-07-03T13:45:27.997+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"13b5dd4f-ae45-4e1b-a9c6-030fb5b6080f","timestampMs":1720014327987,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"}
13:47:15 policy-pap | [2024-07-03T13:45:27.997+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"13b5dd4f-ae45-4e1b-a9c6-030fb5b6080f","timestampMs":1720014327987,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup"}
13:47:15 policy-pap | [2024-07-03T13:45:27.998+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
13:47:15 policy-pap | [2024-07-03T13:45:28.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b5c5a65d-ae64-44a4-b17e-60cdf59c8af9","timestampMs":1720014327988,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.027+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping
13:47:15 policy-pap | [2024-07-03T13:45:28.027+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping enqueue
13:47:15 policy-pap | [2024-07-03T13:45:28.027+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping timer
13:47:15 policy-pap | [2024-07-03T13:45:28.027+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=744fd4d8-3fd6-4e55-83d0-ceced1eaca18, expireMs=1720014357911]
13:47:15 policy-pap | [2024-07-03T13:45:28.027+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping listener
13:47:15 policy-pap | [2024-07-03T13:45:28.027+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopped
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate successful
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec start publishing next request
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange starting
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange starting listener
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange starting timer
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d, expireMs=1720014358032]
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d, expireMs=1720014358032]
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange starting enqueue
13:47:15 policy-pap | [2024-07-03T13:45:28.032+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange started
13:47:15 policy-pap | [2024-07-03T13:45:28.033+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"744fd4d8-3fd6-4e55-83d0-ceced1eaca18","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b5c5a65d-ae64-44a4-b17e-60cdf59c8af9","timestampMs":1720014327988,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.033+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 744fd4d8-3fd6-4e55-83d0-ceced1eaca18
13:47:15 policy-pap | [2024-07-03T13:45:28.033+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","timestampMs":1720014327890,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.046+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","timestampMs":1720014327890,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.046+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
13:47:15 policy-pap | [2024-07-03T13:45:28.053+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"f429decc-52d3-4d6c-b2ff-6e8180242b6d","timestampMs":1720014328045,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.054+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d
13:47:15 policy-pap | [2024-07-03T13:45:28.063+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","timestampMs":1720014327890,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.063+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"f429decc-52d3-4d6c-b2ff-6e8180242b6d","timestampMs":1720014328045,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange stopping
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange stopping enqueue
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange stopping timer
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d, expireMs=1720014358032]
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange stopping listener
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange stopped
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpStateChange successful
13:47:15 policy-pap | [2024-07-03T13:45:28.066+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec start publishing next request
13:47:15 policy-pap | [2024-07-03T13:45:28.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting
13:47:15 policy-pap | [2024-07-03T13:45:28.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting listener
13:47:15 policy-pap | [2024-07-03T13:45:28.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting timer
13:47:15 policy-pap | [2024-07-03T13:45:28.067+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=a00cdd27-8378-4ad8-915a-c211ea2a58e0, expireMs=1720014358067]
13:47:15 policy-pap | [2024-07-03T13:45:28.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate starting enqueue
13:47:15 policy-pap | [2024-07-03T13:45:28.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate started
13:47:15 policy-pap | [2024-07-03T13:45:28.067+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","timestampMs":1720014328057,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.076+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","timestampMs":1720014328057,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.076+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"source":"pap-9555d4a6-4530-4d70-9a52-ca28b9fb7cb3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","timestampMs":1720014328057,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.078+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
13:47:15 policy-pap | [2024-07-03T13:45:28.078+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
13:47:15 policy-pap | [2024-07-03T13:45:28.085+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:47:15 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"a3dce925-a4cd-4f29-a70a-e07aabbfae63","timestampMs":1720014328077,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:47:15 policy-pap | [2024-07-03T13:45:28.085+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a00cdd27-8378-4ad8-915a-c211ea2a58e0
13:47:15 policy-pap | [2024-07-03T13:45:28.086+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:47:15 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a00cdd27-8378-4ad8-915a-c211ea2a58e0","responseStatus":"SUCCESS","responseMessage":"Pdp already
updated"},"messageName":"PDP_STATUS","requestId":"a3dce925-a4cd-4f29-a70a-e07aabbfae63","timestampMs":1720014328077,"name":"apex-6914ad69-ebda-46d7-accf-f32642b87cec","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:47:15 policy-pap | [2024-07-03T13:45:28.086+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping 13:47:15 policy-pap | [2024-07-03T13:45:28.086+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping enqueue 13:47:15 policy-pap | [2024-07-03T13:45:28.086+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping timer 13:47:15 policy-pap | [2024-07-03T13:45:28.086+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a00cdd27-8378-4ad8-915a-c211ea2a58e0, expireMs=1720014358067] 13:47:15 policy-pap | [2024-07-03T13:45:28.086+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopping listener 13:47:15 policy-pap | [2024-07-03T13:45:28.086+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate stopped 13:47:15 policy-pap | [2024-07-03T13:45:28.090+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec PdpUpdate successful 13:47:15 policy-pap | [2024-07-03T13:45:28.090+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6914ad69-ebda-46d7-accf-f32642b87cec has no more requests 13:47:15 policy-pap | [2024-07-03T13:45:41.591+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 13:47:15 policy-pap | [2024-07-03T13:45:41.591+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 13:47:15 policy-pap | [2024-07-03T13:45:41.593+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms 
13:47:15 policy-pap | [2024-07-03T13:45:57.912+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=744fd4d8-3fd6-4e55-83d0-ceced1eaca18, expireMs=1720014357911] 13:47:15 policy-pap | [2024-07-03T13:45:58.032+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=5cdb284f-b3fe-4dbc-9e57-bb99f28b3e9d, expireMs=1720014358032] 13:47:15 policy-pap | [2024-07-03T13:47:05.807+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 13:47:15 =================================== 13:47:15 ======== Logs from prometheus ======== 13:47:15 prometheus | ts=2024-07-03T13:44:22.576Z caller=main.go:589 level=info msg="No time or size retention was set so using the default time retention" duration=15d 13:47:15 prometheus | ts=2024-07-03T13:44:22.576Z caller=main.go:633 level=info msg="Starting Prometheus Server" mode=server version="(version=2.53.0, branch=HEAD, revision=4c35b9250afefede41c5f5acd76191f90f625898)" 13:47:15 prometheus | ts=2024-07-03T13:44:22.576Z caller=main.go:638 level=info build_context="(go=go1.22.4, platform=linux/amd64, user=root@7f8d89cbbd64, date=20240619-07:39:12, tags=netgo,builtinassets,stringlabels)" 13:47:15 prometheus | ts=2024-07-03T13:44:22.576Z caller=main.go:639 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 13:47:15 prometheus | ts=2024-07-03T13:44:22.576Z caller=main.go:640 level=info fd_limits="(soft=1048576, hard=1048576)" 13:47:15 prometheus | ts=2024-07-03T13:44:22.576Z caller=main.go:641 level=info vm_limits="(soft=unlimited, hard=unlimited)" 13:47:15 prometheus | ts=2024-07-03T13:44:22.578Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 13:47:15 prometheus | ts=2024-07-03T13:44:22.579Z caller=main.go:1148 level=info msg="Starting TSDB ..." 
13:47:15 prometheus | ts=2024-07-03T13:44:22.584Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 13:47:15 prometheus | ts=2024-07-03T13:44:22.584Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 13:47:15 prometheus | ts=2024-07-03T13:44:22.588Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 13:47:15 prometheus | ts=2024-07-03T13:44:22.588Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.36µs 13:47:15 prometheus | ts=2024-07-03T13:44:22.588Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while" 13:47:15 prometheus | ts=2024-07-03T13:44:22.589Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 13:47:15 prometheus | ts=2024-07-03T13:44:22.589Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=138.182µs wal_replay_duration=369.776µs wbl_replay_duration=300ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.36µs total_replay_duration=651.281µs 13:47:15 prometheus | ts=2024-07-03T13:44:22.592Z caller=main.go:1169 level=info fs_type=EXT4_SUPER_MAGIC 13:47:15 prometheus | ts=2024-07-03T13:44:22.592Z caller=main.go:1172 level=info msg="TSDB started" 13:47:15 prometheus | ts=2024-07-03T13:44:22.592Z caller=main.go:1354 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 13:47:15 prometheus | ts=2024-07-03T13:44:22.593Z caller=main.go:1391 level=info msg="updated GOGC" old=100 new=75 13:47:15 prometheus | ts=2024-07-03T13:44:22.593Z caller=main.go:1402 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.29693ms db_storage=1.77µs remote_storage=2.03µs web_handler=1.01µs query_engine=1.16µs scrape=274.045µs scrape_sd=148.982µs notify=31.191µs 
notify_sd=12.59µs rules=2.33µs tracing=8.14µs 13:47:15 prometheus | ts=2024-07-03T13:44:22.593Z caller=main.go:1133 level=info msg="Server is ready to receive web requests." 13:47:15 prometheus | ts=2024-07-03T13:44:22.593Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..." 13:47:15 =================================== 13:47:15 ======== Logs from simulator ======== 13:47:15 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 13:47:15 simulator | overriding logback.xml 13:47:15 simulator | 2024-07-03 13:44:21,021 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 13:47:15 simulator | 2024-07-03 13:44:21,097 INFO org.onap.policy.models.simulators starting 13:47:15 simulator | 2024-07-03 13:44:21,098 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 13:47:15 simulator | 2024-07-03 13:44:21,287 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 13:47:15 simulator | 2024-07-03 13:44:21,289 INFO org.onap.policy.models.simulators starting A&AI simulator 13:47:15 simulator | 2024-07-03 13:44:21,402 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:47:15 simulator | 2024-07-03 13:44:21,413 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:21,415 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:21,423 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 
922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 13:47:15 simulator | 2024-07-03 13:44:21,485 INFO Session workerName=node0 13:47:15 simulator | 2024-07-03 13:44:22,013 INFO Using GSON for REST calls 13:47:15 simulator | 2024-07-03 13:44:22,109 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 13:47:15 simulator | 2024-07-03 13:44:22,122 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 13:47:15 simulator | 2024-07-03 13:44:22,131 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1574ms 13:47:15 simulator | 2024-07-03 13:44:22,131 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4284 ms. 
13:47:15 simulator | 2024-07-03 13:44:22,136 INFO org.onap.policy.models.simulators starting SDNC simulator 13:47:15 simulator | 2024-07-03 13:44:22,139 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:47:15 simulator | 2024-07-03 13:44:22,139 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:22,140 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:22,141 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 13:47:15 simulator | 2024-07-03 13:44:22,158 INFO Session workerName=node0 13:47:15 simulator | 2024-07-03 13:44:22,244 INFO Using GSON for REST calls 13:47:15 simulator | 2024-07-03 13:44:22,255 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 13:47:15 simulator | 2024-07-03 13:44:22,258 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 13:47:15 simulator | 2024-07-03 13:44:22,258 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1701ms 13:47:15 simulator | 2024-07-03 13:44:22,258 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC 
simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4882 ms. 13:47:15 simulator | 2024-07-03 13:44:22,286 INFO org.onap.policy.models.simulators starting SO simulator 13:47:15 simulator | 2024-07-03 13:44:22,291 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:47:15 simulator | 2024-07-03 13:44:22,292 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:22,293 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:22,294 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 13:47:15 simulator | 2024-07-03 13:44:22,308 INFO Session workerName=node0 13:47:15 simulator | 2024-07-03 13:44:22,374 INFO Using GSON for REST calls 13:47:15 simulator | 2024-07-03 13:44:22,385 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 13:47:15 simulator | 2024-07-03 13:44:22,386 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 13:47:15 simulator | 2024-07-03 13:44:22,387 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1830ms 13:47:15 simulator | 2024-07-03 13:44:22,387 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4906 ms. 13:47:15 simulator | 2024-07-03 13:44:22,388 INFO org.onap.policy.models.simulators starting VFC simulator 13:47:15 simulator | 2024-07-03 13:44:22,392 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:47:15 simulator | 2024-07-03 13:44:22,393 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:22,394 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:47:15 simulator | 2024-07-03 13:44:22,394 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 13:47:15 simulator | 2024-07-03 13:44:22,406 INFO Session workerName=node0 13:47:15 simulator | 2024-07-03 13:44:22,454 INFO Using GSON for REST calls 13:47:15 simulator | 2024-07-03 13:44:22,463 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 13:47:15 simulator | 2024-07-03 13:44:22,464 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 13:47:15 simulator | 2024-07-03 13:44:22,465 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1908ms 13:47:15 simulator | 2024-07-03 13:44:22,465 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4929 ms. 13:47:15 simulator | 2024-07-03 13:44:22,465 INFO org.onap.policy.models.simulators started 13:47:15 =================================== 13:47:15 ======== Logs from zookeeper ======== 13:47:15 zookeeper | ===> User 13:47:15 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 13:47:15 zookeeper | ===> Configuring ... 13:47:15 zookeeper | ===> Running preflight checks ... 13:47:15 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 13:47:15 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 13:47:15 zookeeper | ===> Launching ... 13:47:15 zookeeper | ===> Launching zookeeper ... 
13:47:15 zookeeper | [2024-07-03 13:44:25,436] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,442] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,443] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,443] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,443] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,444] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 13:47:15 zookeeper | [2024-07-03 13:44:25,444] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 13:47:15 zookeeper | [2024-07-03 13:44:25,444] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 13:47:15 zookeeper | [2024-07-03 13:44:25,444] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 13:47:15 zookeeper | [2024-07-03 13:44:25,445] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 13:47:15 zookeeper | [2024-07-03 13:44:25,445] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,446] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,446] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,446] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,446] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:47:15 zookeeper | [2024-07-03 13:44:25,446] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 13:47:15 zookeeper | [2024-07-03 13:44:25,456] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) 13:47:15 zookeeper | [2024-07-03 13:44:25,459] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 13:47:15 zookeeper | [2024-07-03 13:44:25,459] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 13:47:15 zookeeper | [2024-07-03 13:44:25,461] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 
(org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,470] INFO (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../sh
are/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.1
00.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-
4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | 
[2024-07-03 13:44:25,472] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,472] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,473] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 13:47:15 zookeeper | [2024-07-03 13:44:25,474] INFO minSessionTimeout set to 6000 ms 
(org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,474] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,475] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 13:47:15 zookeeper | [2024-07-03 13:44:25,475] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 13:47:15 zookeeper | [2024-07-03 13:44:25,475] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:47:15 zookeeper | [2024-07-03 13:44:25,476] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:47:15 zookeeper | [2024-07-03 13:44:25,476] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:47:15 zookeeper | [2024-07-03 13:44:25,476] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:47:15 zookeeper | [2024-07-03 13:44:25,476] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:47:15 zookeeper | [2024-07-03 13:44:25,476] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:47:15 zookeeper | [2024-07-03 13:44:25,478] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,478] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,478] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 13:47:15 zookeeper | [2024-07-03 13:44:25,478] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 13:47:15 zookeeper | 
[2024-07-03 13:44:25,478] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,497] INFO Logging initialized @536ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 13:47:15 zookeeper | [2024-07-03 13:44:25,580] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 13:47:15 zookeeper | [2024-07-03 13:44:25,580] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 13:47:15 zookeeper | [2024-07-03 13:44:25,597] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 13:47:15 zookeeper | [2024-07-03 13:44:25,627] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 13:47:15 zookeeper | [2024-07-03 13:44:25,627] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 13:47:15 zookeeper | [2024-07-03 13:44:25,629] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 13:47:15 zookeeper | [2024-07-03 13:44:25,631] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 13:47:15 zookeeper | [2024-07-03 13:44:25,638] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 13:47:15 zookeeper | [2024-07-03 13:44:25,651] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 13:47:15 zookeeper | [2024-07-03 13:44:25,651] INFO Started @690ms (org.eclipse.jetty.server.Server) 13:47:15 zookeeper | [2024-07-03 13:44:25,651] 
INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,655] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 13:47:15 zookeeper | [2024-07-03 13:44:25,656] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 13:47:15 zookeeper | [2024-07-03 13:44:25,657] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 13:47:15 zookeeper | [2024-07-03 13:44:25,659] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 13:47:15 zookeeper | [2024-07-03 13:44:25,673] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 13:47:15 zookeeper | [2024-07-03 13:44:25,673] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 13:47:15 zookeeper | [2024-07-03 13:44:25,674] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 13:47:15 zookeeper | [2024-07-03 13:44:25,674] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 13:47:15 zookeeper | [2024-07-03 13:44:25,678] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 13:47:15 zookeeper | [2024-07-03 13:44:25,678] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 13:47:15 zookeeper | [2024-07-03 13:44:25,681] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 13:47:15 zookeeper | [2024-07-03 13:44:25,682] 
INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 13:47:15 zookeeper | [2024-07-03 13:44:25,682] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:47:15 zookeeper | [2024-07-03 13:44:25,691] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 13:47:15 zookeeper | [2024-07-03 13:44:25,692] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 13:47:15 zookeeper | [2024-07-03 13:44:25,703] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 13:47:15 zookeeper | [2024-07-03 13:44:25,704] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 13:47:15 zookeeper | [2024-07-03 13:44:30,888] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 13:47:15 =================================== 13:47:15 Tearing down containers... 
13:47:15 Container policy-csit Stopping
13:47:15 Container grafana Stopping
13:47:15 Container policy-apex-pdp Stopping
13:47:15 Container policy-csit Stopped
13:47:15 Container policy-csit Removing
13:47:15 Container policy-csit Removed
13:47:15 Container grafana Stopped
13:47:15 Container grafana Removing
13:47:15 Container grafana Removed
13:47:15 Container prometheus Stopping
13:47:16 Container prometheus Stopped
13:47:16 Container prometheus Removing
13:47:16 Container prometheus Removed
13:47:25 Container policy-apex-pdp Stopped
13:47:25 Container policy-apex-pdp Removing
13:47:25 Container policy-apex-pdp Removed
13:47:25 Container simulator Stopping
13:47:25 Container policy-pap Stopping
13:47:36 Container simulator Stopped
13:47:36 Container simulator Removing
13:47:36 Container simulator Removed
13:47:36 Container policy-pap Stopped
13:47:36 Container policy-pap Removing
13:47:36 Container policy-pap Removed
13:47:36 Container policy-api Stopping
13:47:36 Container kafka Stopping
13:47:37 Container kafka Stopped
13:47:37 Container kafka Removing
13:47:37 Container kafka Removed
13:47:37 Container zookeeper Stopping
13:47:38 Container zookeeper Stopped
13:47:38 Container zookeeper Removing
13:47:38 Container zookeeper Removed
13:47:46 Container policy-api Stopped
13:47:46 Container policy-api Removing
13:47:46 Container policy-api Removed
13:47:46 Container policy-db-migrator Stopping
13:47:46 Container policy-db-migrator Stopped
13:47:46 Container policy-db-migrator Removing
13:47:46 Container policy-db-migrator Removed
13:47:46 Container mariadb Stopping
13:47:47 Container mariadb Stopped
13:47:47 Container mariadb Removing
13:47:47 Container mariadb Removed
13:47:47 Network compose_default Removing
13:47:47 Network compose_default Removed
13:47:47 $ ssh-agent -k
13:47:47 unset SSH_AUTH_SOCK;
13:47:47 unset SSH_AGENT_PID;
13:47:47 echo Agent pid 2076 killed;
13:47:47 [ssh-agent] Stopped.
13:47:47 Robot results publisher started...
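The teardown sequence above walks each container through Compose's Stopping → Stopped → Removing → Removed lifecycle before removing the network. A minimal sketch (the helper name and the sample log lines are illustrative, not part of the job) that recovers the removal order from such a log:

```shell
#!/usr/bin/env bash
# Recover the order in which containers were removed from a
# Compose-style teardown log. Sample lines stand in for real output.
removal_order() {
  # Fields: <time> Container <name> <state>; keep names whose state is Removed.
  awk '$2 == "Container" && $4 == "Removed" { print $3 }'
}

log='13:47:15 Container policy-csit Removed
13:47:15 Container grafana Removed
13:47:16 Container prometheus Removed'

printf '%s\n' "$log" | removal_order
```

Run against a full console log, this lists the containers in the order Compose finished removing them, which is handy when debugging shutdown-ordering problems.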
13:47:47 INFO: Checking test criticality is deprecated and will be dropped in a future release! 13:47:47 -Parsing output xml: 13:47:48 Done! 13:47:48 -Copying log files to build dir: 13:47:48 Done! 13:47:48 -Assigning results to build: 13:47:48 Done! 13:47:48 -Checking thresholds: 13:47:48 Done! 13:47:48 Done publishing Robot results. 13:47:48 Build step 'Publish Robot Framework test results' changed build result to UNSTABLE 13:47:48 [PostBuildScript] - [INFO] Executing post build scripts. 13:47:48 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins5536114256878215873.sh 13:47:48 ---> sysstat.sh 13:47:48 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins2833437562009965858.sh 13:47:48 ---> package-listing.sh 13:47:48 ++ facter osfamily 13:47:48 ++ tr '[:upper:]' '[:lower:]' 13:47:49 + OS_FAMILY=debian 13:47:49 + workspace=/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres 13:47:49 + START_PACKAGES=/tmp/packages_start.txt 13:47:49 + END_PACKAGES=/tmp/packages_end.txt 13:47:49 + DIFF_PACKAGES=/tmp/packages_diff.txt 13:47:49 + PACKAGES=/tmp/packages_start.txt 13:47:49 + '[' /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres ']' 13:47:49 + PACKAGES=/tmp/packages_end.txt 13:47:49 + case "${OS_FAMILY}" in 13:47:49 + dpkg -l 13:47:49 + grep '^ii' 13:47:49 + '[' -f /tmp/packages_start.txt ']' 13:47:49 + '[' -f /tmp/packages_end.txt ']' 13:47:49 + diff /tmp/packages_start.txt /tmp/packages_end.txt 13:47:49 + '[' /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres ']' 13:47:49 + mkdir -p /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/archives/ 13:47:49 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/archives/ 13:47:49 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash 
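The package-listing.sh trace above snapshots the installed packages (`dpkg -l | grep '^ii'`) into packages_start.txt and packages_end.txt, then diffs them into packages_diff.txt for the archives. A self-contained sketch of that diff step, with hypothetical sample snapshots standing in for a real `dpkg -l` run:

```shell
#!/usr/bin/env bash
# Sketch of the packages_start/packages_end diff from package-listing.sh.
# The two sample lists below are illustrative stand-ins for dpkg output.
set -eu
workdir="$(mktemp -d)"
trap 'rm -rf "$workdir"' EXIT

printf 'ii  bash\nii  coreutils\n' > "$workdir/packages_start.txt"
printf 'ii  bash\nii  coreutils\nii  jq\n' > "$workdir/packages_end.txt"

# diff exits non-zero when the files differ, so mask the status the way
# the job script effectively does by continuing regardless.
diff "$workdir/packages_start.txt" "$workdir/packages_end.txt" \
  > "$workdir/packages_diff.txt" || true

cat "$workdir/packages_diff.txt"
```

The diff then shows only packages installed (or removed) during the build, which is what gets copied into the job's archives/ directory.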
/tmp/jenkins2775557909083486295.sh 13:47:49 ---> capture-instance-metadata.sh 13:47:49 Setup pyenv: 13:47:49 system 13:47:49 3.8.13 13:47:49 3.9.13 13:47:49 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/.python-version) 13:47:49 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZmm from file:/tmp/.os_lf_venv 13:47:50 lf-activate-venv(): INFO: Installing: lftools 13:47:58 lf-activate-venv(): INFO: Adding /tmp/venv-ZZmm/bin to PATH 13:47:58 INFO: Running in OpenStack, capturing instance metadata 13:47:58 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins3446918795491282419.sh 13:47:58 provisioning config files... 13:47:58 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres@tmp/config2098791844742794878tmp 13:47:58 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 13:47:58 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 13:47:58 [EnvInject] - Injecting environment variables from a build step. 13:47:58 [EnvInject] - Injecting as environment variables the properties content 13:47:58 SERVER_ID=logs 13:47:58 13:47:58 [EnvInject] - Variables injected successfully. 
13:47:58 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins10136094788918019786.sh 13:47:58 ---> create-netrc.sh 13:47:58 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins10261184839140306982.sh 13:47:58 ---> python-tools-install.sh 13:47:58 Setup pyenv: 13:47:58 system 13:47:58 3.8.13 13:47:58 3.9.13 13:47:58 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/.python-version) 13:47:59 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZmm from file:/tmp/.os_lf_venv 13:47:59 lf-activate-venv(): INFO: Installing: lftools 13:48:07 lf-activate-venv(): INFO: Adding /tmp/venv-ZZmm/bin to PATH 13:48:07 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins13016385822282217991.sh 13:48:07 ---> sudo-logs.sh 13:48:07 Archiving 'sudo' log.. 13:48:07 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash /tmp/jenkins9185639884688901051.sh 13:48:07 ---> job-cost.sh 13:48:07 Setup pyenv: 13:48:07 system 13:48:07 3.8.13 13:48:07 3.9.13 13:48:07 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/.python-version) 13:48:07 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZmm from file:/tmp/.os_lf_venv 13:48:08 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 13:48:12 lf-activate-venv(): INFO: Adding /tmp/venv-ZZmm/bin to PATH 13:48:12 INFO: No Stack... 
13:48:12 INFO: Retrieving Pricing Info for: v3-standard-8 13:48:13 INFO: Archiving Costs 13:48:13 [policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres] $ /bin/bash -l /tmp/jenkins12949106591489069234.sh 13:48:13 ---> logs-deploy.sh 13:48:13 Setup pyenv: 13:48:13 system 13:48:13 3.8.13 13:48:13 3.9.13 13:48:13 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/.python-version) 13:48:13 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZmm from file:/tmp/.os_lf_venv 13:48:14 lf-activate-venv(): INFO: Installing: lftools 13:48:22 lf-activate-venv(): INFO: Adding /tmp/venv-ZZmm/bin to PATH 13:48:22 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-apex-pdp-master-project-csit-verify-apex-pdp-postgres/97 13:48:22 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 13:48:23 Archives upload complete. 13:48:23 INFO: archiving logs to Nexus 13:48:24 ---> uname -a: 13:48:24 Linux prd-ubuntu1804-docker-8c-8g-21073 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 13:48:24 13:48:24 13:48:24 ---> lscpu: 13:48:24 Architecture: x86_64 13:48:24 CPU op-mode(s): 32-bit, 64-bit 13:48:24 Byte Order: Little Endian 13:48:24 CPU(s): 8 13:48:24 On-line CPU(s) list: 0-7 13:48:24 Thread(s) per core: 1 13:48:24 Core(s) per socket: 1 13:48:24 Socket(s): 8 13:48:24 NUMA node(s): 1 13:48:24 Vendor ID: AuthenticAMD 13:48:24 CPU family: 23 13:48:24 Model: 49 13:48:24 Model name: AMD EPYC-Rome Processor 13:48:24 Stepping: 0 13:48:24 CPU MHz: 2800.000 13:48:24 BogoMIPS: 5600.00 13:48:24 Virtualization: AMD-V 13:48:24 Hypervisor vendor: KVM 13:48:24 Virtualization type: full 13:48:24 L1d cache: 32K 13:48:24 L1i cache: 32K 13:48:24 L2 cache: 512K 13:48:24 L3 cache: 16384K 13:48:24 NUMA node0 CPU(s): 0-7 13:48:24 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext 
fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 13:48:24 13:48:24 13:48:24 ---> nproc: 13:48:24 8 13:48:24 13:48:24 13:48:24 ---> df -h: 13:48:24 Filesystem Size Used Avail Use% Mounted on 13:48:24 udev 16G 0 16G 0% /dev 13:48:24 tmpfs 3.2G 708K 3.2G 1% /run 13:48:24 /dev/vda1 155G 14G 141G 9% / 13:48:24 tmpfs 16G 0 16G 0% /dev/shm 13:48:24 tmpfs 5.0M 0 5.0M 0% /run/lock 13:48:24 tmpfs 16G 0 16G 0% /sys/fs/cgroup 13:48:24 /dev/vda15 105M 4.4M 100M 5% /boot/efi 13:48:24 tmpfs 3.2G 0 3.2G 0% /run/user/1001 13:48:24 13:48:24 13:48:24 ---> free -m: 13:48:24 total used free shared buff/cache available 13:48:24 Mem: 32167 875 24731 0 6559 30836 13:48:24 Swap: 1023 0 1023 13:48:24 13:48:24 13:48:24 ---> ip addr: 13:48:24 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 13:48:24 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 13:48:24 inet 127.0.0.1/8 scope host lo 13:48:24 valid_lft forever preferred_lft forever 13:48:24 inet6 ::1/128 scope host 13:48:24 valid_lft forever preferred_lft forever 13:48:24 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 13:48:24 link/ether fa:16:3e:95:8d:c6 brd ff:ff:ff:ff:ff:ff 13:48:24 inet 10.30.107.69/23 brd 10.30.107.255 scope global dynamic ens3 13:48:24 valid_lft 85921sec preferred_lft 85921sec 13:48:24 inet6 fe80::f816:3eff:fe95:8dc6/64 scope link 13:48:24 valid_lft forever preferred_lft forever 13:48:24 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 13:48:24 link/ether 02:42:75:49:dc:52 brd ff:ff:ff:ff:ff:ff 13:48:24 inet 
10.250.0.254/24 brd 10.250.0.255 scope global docker0 13:48:24 valid_lft forever preferred_lft forever 13:48:24 inet6 fe80::42:75ff:fe49:dc52/64 scope link 13:48:24 valid_lft forever preferred_lft forever 13:48:24 13:48:24 13:48:24 ---> sar -b -r -n DEV: 13:48:24 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21073) 07/03/24 _x86_64_ (8 CPU) 13:48:24 13:48:24 13:40:27 LINUX RESTART (8 CPU) 13:48:24 13:48:24 13:41:01 tps rtps wtps bread/s bwrtn/s 13:48:24 13:42:01 299.55 70.55 229.00 4563.05 28458.44 13:48:24 13:43:01 208.57 19.63 188.94 2302.15 53234.86 13:48:24 13:44:01 155.71 0.05 155.66 0.40 106483.85 13:48:24 13:45:01 323.25 12.63 310.61 780.50 55768.81 13:48:24 13:46:01 142.28 0.20 142.08 21.20 38032.03 13:48:24 13:47:01 23.68 0.22 23.46 15.86 21179.25 13:48:24 13:48:01 74.05 1.30 72.75 97.85 22486.77 13:48:24 Average: 175.30 14.94 160.36 1111.74 46519.71 13:48:24 13:48:24 13:41:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 13:48:24 13:42:01 30128864 31648932 2810356 8.53 59232 1777312 1545920 4.55 936664 1597784 131280 13:48:24 13:43:01 26093072 31636896 6846148 20.78 121364 5583284 1758280 5.17 1008684 5337800 2948612 13:48:24 13:44:01 25745576 31633868 7193644 21.84 127156 5900628 1895680 5.58 1027620 5654168 851136 13:48:24 13:45:01 23902796 29976832 9036424 27.43 143156 6053128 8385912 24.67 2874784 5578148 448 13:48:24 13:46:01 22869156 29600520 10070064 30.57 172308 6635264 8941376 26.31 3332572 6097728 317456 13:48:24 13:47:01 23077404 29471364 9861816 29.94 172616 6304412 9183244 27.02 3460140 5766696 196 13:48:24 13:48:01 25346332 31595072 7592888 23.05 173936 6174796 1668320 4.91 1386644 5638036 17568 13:48:24 Average: 25309029 30794783 7630191 23.16 138538 5489832 4768390 14.03 2003873 5095766 609528 13:48:24 13:48:24 13:41:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 13:48:24 13:42:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:24 13:42:01 ens3 
363.15 237.98 1522.98 76.53 0.00 0.00 0.00 0.00 13:48:24 13:42:01 lo 1.00 1.00 0.11 0.11 0.00 0.00 0.00 0.00 13:48:24 13:43:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:24 13:43:01 ens3 1476.05 814.23 33313.03 69.09 0.00 0.00 0.00 0.00 13:48:24 13:43:01 lo 14.53 14.53 1.41 1.41 0.00 0.00 0.00 0.00 13:48:24 13:44:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:24 13:44:01 ens3 5.33 4.83 1.13 1.40 0.00 0.00 0.00 0.00 13:48:24 13:44:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:24 13:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:24 13:45:01 veth32a2fa6 2.48 2.48 0.20 0.20 0.00 0.00 0.00 0.00 13:48:24 13:45:01 br-78bed23df697 0.88 0.73 0.07 0.38 0.00 0.00 0.00 0.00 13:48:24 13:45:01 ens3 5.60 4.37 1.62 1.53 0.00 0.00 0.00 0.00 13:48:24 13:46:01 docker0 12.11 16.91 2.06 285.23 0.00 0.00 0.00 0.00 13:48:24 13:46:01 veth32a2fa6 12.40 14.03 2.91 2.57 0.00 0.00 0.00 0.00 13:48:24 13:46:01 br-78bed23df697 0.42 0.52 0.05 0.04 0.00 0.00 0.00 0.00 13:48:24 13:46:01 ens3 58.72 41.63 1182.63 6.05 0.00 0.00 0.00 0.00 13:48:24 13:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:24 13:47:01 veth32a2fa6 6.30 9.25 1.46 0.71 0.00 0.00 0.00 0.00 13:48:24 13:47:01 br-78bed23df697 0.23 0.10 0.01 0.01 0.00 0.00 0.00 0.00 13:48:24 13:47:01 veth26920d5 3.62 3.15 19.26 8.32 0.00 0.00 0.00 0.00 13:48:24 13:48:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:24 13:48:01 ens3 1974.64 1156.29 36161.19 193.40 0.00 0.00 0.00 0.00 13:48:24 13:48:01 lo 27.03 27.03 2.50 2.50 0.00 0.00 0.00 0.00 13:48:24 Average: docker0 1.73 2.42 0.29 40.75 0.00 0.00 0.00 0.00 13:48:24 Average: ens3 279.65 163.70 5155.57 27.43 0.00 0.00 0.00 0.00 13:48:24 Average: lo 3.32 3.32 0.31 0.31 0.00 0.00 0.00 0.00 13:48:24 13:48:24 13:48:24 ---> sar -P ALL: 13:48:24 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21073) 07/03/24 _x86_64_ (8 CPU) 13:48:24 13:48:24 13:40:27 LINUX RESTART (8 CPU) 13:48:24 13:48:24 13:41:01 CPU %user %nice %system %iowait %steal 
%idle 13:48:24 13:42:01 all 8.74 0.00 0.99 4.47 0.09 85.72 13:48:24 13:42:01 0 5.42 0.00 0.50 2.18 0.02 91.88 13:48:24 13:42:01 1 23.20 0.00 1.67 3.91 0.08 71.14 13:48:24 13:42:01 2 12.93 0.00 1.09 2.67 0.03 83.28 13:48:24 13:42:01 3 3.93 0.00 0.65 1.36 0.03 94.02 13:48:24 13:42:01 4 6.63 0.00 0.67 2.29 0.05 90.36 13:48:24 13:42:01 5 9.18 0.00 1.56 3.77 0.42 85.07 13:48:24 13:42:01 6 1.95 0.00 0.99 17.83 0.02 79.21 13:48:24 13:42:01 7 6.70 0.00 0.75 1.80 0.03 90.72 13:48:24 13:43:01 all 15.06 0.00 5.23 11.85 0.07 67.78 13:48:24 13:43:01 0 11.07 0.00 4.57 6.78 0.07 77.51 13:48:24 13:43:01 1 20.49 0.00 5.54 23.19 0.10 50.68 13:48:24 13:43:01 2 31.77 0.00 6.20 24.49 0.10 37.44 13:48:24 13:43:01 3 13.07 0.00 4.93 3.48 0.07 78.45 13:48:24 13:43:01 4 10.94 0.00 4.40 4.55 0.05 80.06 13:48:24 13:43:01 5 11.39 0.00 5.41 7.95 0.07 75.18 13:48:24 13:43:01 6 10.48 0.00 5.99 20.17 0.08 63.27 13:48:24 13:43:01 7 11.28 0.00 4.87 4.16 0.05 79.63 13:48:24 13:44:01 all 2.55 0.00 1.19 24.83 0.02 71.41 13:48:24 13:44:01 0 3.15 0.00 1.83 30.98 0.02 64.03 13:48:24 13:44:01 1 2.40 0.00 0.45 0.48 0.02 96.65 13:48:24 13:44:01 2 2.89 0.00 1.19 49.60 0.05 46.28 13:48:24 13:44:01 3 3.75 0.00 1.27 18.14 0.02 76.82 13:48:24 13:44:01 4 1.24 0.00 1.12 7.15 0.02 90.48 13:48:24 13:44:01 5 2.49 0.00 1.50 22.92 0.02 73.07 13:48:24 13:44:01 6 1.76 0.00 1.05 35.84 0.02 61.33 13:48:24 13:44:01 7 2.75 0.00 1.09 33.84 0.03 62.28 13:48:24 13:45:01 all 18.83 0.00 2.84 8.84 0.08 69.42 13:48:24 13:45:01 0 21.55 0.00 2.99 7.23 0.07 68.17 13:48:24 13:45:01 1 13.98 0.00 2.10 2.86 0.07 81.00 13:48:24 13:45:01 2 20.09 0.00 2.99 2.43 0.07 74.42 13:48:24 13:45:01 3 19.40 0.00 2.89 10.22 0.08 67.40 13:48:24 13:45:01 4 13.48 0.00 2.42 28.98 0.08 55.03 13:48:24 13:45:01 5 25.47 0.00 4.01 8.88 0.08 61.55 13:48:24 13:45:01 6 15.19 0.00 2.37 2.89 0.07 79.48 13:48:24 13:45:01 7 21.41 0.00 2.94 7.24 0.08 68.33 13:48:24 13:46:01 all 11.14 0.00 1.83 4.06 0.05 82.92 13:48:24 13:46:01 0 10.63 0.00 1.79 0.20 0.05 87.33 13:48:24 
13:46:01 1 11.10 0.00 1.81 3.96 0.05 83.09 13:48:24 13:46:01 2 10.22 0.00 1.74 0.79 0.05 87.20 13:48:24 13:46:01 3 9.69 0.00 1.63 5.19 0.05 83.44 13:48:24 13:46:01 4 13.53 0.00 1.89 0.90 0.05 83.62 13:48:24 13:46:01 5 13.93 0.00 2.01 1.09 0.07 82.89 13:48:24 13:46:01 6 10.31 0.00 1.84 12.05 0.07 75.74 13:48:24 13:46:01 7 9.67 0.00 1.96 8.30 0.05 80.01 13:48:24 13:47:01 all 3.12 0.00 0.45 1.90 0.04 94.50 13:48:24 13:47:01 0 2.84 0.00 0.35 0.20 0.03 96.58 13:48:24 13:47:01 1 2.24 0.00 0.35 0.15 0.05 97.21 13:48:24 13:47:01 2 3.97 0.00 0.47 0.00 0.03 95.53 13:48:24 13:47:01 3 3.49 0.00 0.47 0.07 0.03 95.94 13:48:24 13:47:01 4 4.79 0.00 0.43 0.00 0.02 94.76 13:48:24 13:47:01 5 2.35 0.00 0.63 0.07 0.03 96.91 13:48:24 13:47:01 6 2.01 0.00 0.34 0.20 0.03 97.42 13:48:24 13:47:01 7 3.21 0.00 0.50 14.51 0.03 81.75 13:48:24 13:48:01 all 4.21 0.00 0.82 2.20 0.04 92.73 13:48:24 13:48:01 0 3.32 0.00 0.92 0.10 0.03 95.63 13:48:24 13:48:01 1 2.26 0.00 0.74 0.22 0.05 96.74 13:48:24 13:48:01 2 2.61 0.00 0.80 0.22 0.05 96.32 13:48:24 13:48:01 3 1.92 0.00 0.70 0.45 0.03 96.89 13:48:24 13:48:01 4 1.45 0.00 0.87 0.18 0.05 97.45 13:48:24 13:48:01 5 2.54 0.00 0.74 12.22 0.05 84.45 13:48:24 13:48:01 6 3.47 0.00 0.77 0.15 0.03 95.57 13:48:24 13:48:01 7 16.15 0.00 1.05 4.08 0.05 78.66 13:48:24 Average: all 9.08 0.00 1.90 8.30 0.06 80.66 13:48:24 Average: 0 8.27 0.00 1.84 6.80 0.04 83.05 13:48:24 Average: 1 10.79 0.00 1.80 4.95 0.06 82.39 13:48:24 Average: 2 12.04 0.00 2.06 11.43 0.06 74.42 13:48:24 Average: 3 7.88 0.00 1.79 5.55 0.05 84.74 13:48:24 Average: 4 7.43 0.00 1.68 6.28 0.05 84.57 13:48:24 Average: 5 9.61 0.00 2.26 8.13 0.11 79.89 13:48:24 Average: 6 6.45 0.00 1.90 12.74 0.05 78.87 13:48:24 Average: 7 10.16 0.00 1.88 10.56 0.05 77.36 13:48:24 13:48:24 13:48:24
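The `sar -P ALL` output above closes with per-CPU averages; the "Average: all" row summarizes the whole run. A small awk sketch (fed an excerpt copied from the rows above) that extracts the all-CPU %idle figure from such output:

```shell
#!/usr/bin/env bash
# Pull the all-CPU average %idle out of `sar -P ALL` output.
# The excerpt below is copied from the averages printed by this run.
sar_excerpt='Average:     all      9.08      0.00      1.90      8.30      0.06     80.66
Average:       0      8.27      0.00      1.84      6.80      0.04     83.05'

# Columns: Average: CPU %user %nice %system %iowait %steal %idle
avg_idle="$(printf '%s\n' "$sar_excerpt" \
  | awk '$1 == "Average:" && $2 == "all" { print $8 }')"
echo "average %idle: $avg_idle"
```

Piping a real `sar -P ALL` capture through the same awk filter gives a one-number health check for the build host without scrolling the full table.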