14:24:21 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/138377
14:24:21 Running as SYSTEM
14:24:21 [EnvInject] - Loading node environment variables.
14:24:21 Building remotely on prd-ubuntu1804-docker-8c-8g-21085 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp
14:24:21 [ssh-agent] Looking for ssh-agent implementation...
14:24:21 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
14:24:21 $ ssh-agent
14:24:21 SSH_AUTH_SOCK=/tmp/ssh-9VU7m84Q15T5/agent.2295
14:24:21 SSH_AGENT_PID=2297
14:24:21 [ssh-agent] Started.
14:24:21 Running ssh-add (command line suppressed)
14:24:21 Identity added: /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp@tmp/private_key_17366572002094807957.key (/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp@tmp/private_key_17366572002094807957.key)
14:24:21 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
14:24:21 The recommended git tool is: NONE
14:24:26 using credential onap-jenkins-ssh
14:24:27 Wiping out workspace first.
14:24:27 Cloning the remote Git repository
14:24:27 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
14:24:27  > git init /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp # timeout=10
14:24:27 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:24:27  > git --version # timeout=10
14:24:27  > git --version # 'git version 2.17.1'
14:24:27 using GIT_SSH to set credentials Gerrit user
14:24:27 Verifying host key using manually-configured host key entries
14:24:27  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
14:24:27  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:24:27  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
14:24:28  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:24:28 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:24:28 using GIT_SSH to set credentials Gerrit user
14:24:28 Verifying host key using manually-configured host key entries
14:24:28  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/77/138377/1 # timeout=30
14:24:28  > git rev-parse da7302b230b9a765fb93a32b2c9bae9c3f025fb7^{commit} # timeout=10
14:24:28 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
14:24:28 Checking out Revision da7302b230b9a765fb93a32b2c9bae9c3f025fb7 (refs/changes/77/138377/1)
14:24:28  > git config core.sparsecheckout # timeout=10
14:24:28  > git checkout -f da7302b230b9a765fb93a32b2c9bae9c3f025fb7 # timeout=30
14:24:31 Commit message: "Setting jaeger version for CSITs"
14:24:31  > git rev-parse FETCH_HEAD^{commit} # timeout=10
14:24:31  > git rev-list --no-walk 54d234de0d9260f610425cd496a52265a4082441 # timeout=10
14:24:31 provisioning config files...
14:24:31 copy managed file [npmrc] to file:/home/jenkins/.npmrc
14:24:31 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
14:24:31 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins15978912552415859845.sh
14:24:31 ---> python-tools-install.sh
14:24:31 Setup pyenv:
14:24:31 * system (set by /opt/pyenv/version)
14:24:31 * 3.8.13 (set by /opt/pyenv/version)
14:24:32 * 3.9.13 (set by /opt/pyenv/version)
14:24:32 * 3.10.6 (set by /opt/pyenv/version)
14:24:36 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-FKTr
14:24:36 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
14:24:40 lf-activate-venv(): INFO: Installing: lftools
14:25:07 lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH
14:25:07 Generating Requirements File
14:25:27 Python 3.10.6
14:25:27 pip 24.1.1 from /tmp/venv-FKTr/lib/python3.10/site-packages/pip (python 3.10)
14:25:27 appdirs==1.4.4
14:25:27 argcomplete==3.4.0
14:25:27 aspy.yaml==1.3.0
14:25:27 attrs==23.2.0
14:25:27 autopage==0.5.2
14:25:27 beautifulsoup4==4.12.3
14:25:27 boto3==1.34.138
14:25:27 botocore==1.34.138
14:25:27 bs4==0.0.2
14:25:27 cachetools==5.3.3
14:25:27 certifi==2024.6.2
14:25:27 cffi==1.16.0
14:25:27 cfgv==3.4.0
14:25:27 chardet==5.2.0
14:25:27 charset-normalizer==3.3.2
14:25:27 click==8.1.7
14:25:27 cliff==4.7.0
14:25:27 cmd2==2.4.3
14:25:27 cryptography==3.3.2
14:25:27 debtcollector==3.0.0
14:25:27 decorator==5.1.1
14:25:27 defusedxml==0.7.1
14:25:27 Deprecated==1.2.14
14:25:27 distlib==0.3.8
14:25:27 dnspython==2.6.1
14:25:27 docker==4.2.2
14:25:27 dogpile.cache==1.3.3
14:25:27 email_validator==2.2.0
14:25:27 filelock==3.15.4
14:25:27 future==1.0.0
14:25:27 gitdb==4.0.11
14:25:27 GitPython==3.1.43
14:25:27 google-auth==2.31.0
14:25:27 httplib2==0.22.0
14:25:27 identify==2.5.36
14:25:27 idna==3.7
14:25:27 importlib-resources==1.5.0
14:25:27 iso8601==2.1.0
14:25:27 Jinja2==3.1.4
14:25:27 jmespath==1.0.1
14:25:27 jsonpatch==1.33
14:25:27 jsonpointer==3.0.0
14:25:27 jsonschema==4.22.0
14:25:27 jsonschema-specifications==2023.12.1
14:25:27 keystoneauth1==5.6.0
14:25:27 kubernetes==30.1.0
14:25:27 lftools==0.37.10
14:25:27 lxml==5.2.2
14:25:27 MarkupSafe==2.1.5
14:25:27 msgpack==1.0.8
14:25:27 multi_key_dict==2.0.3
14:25:27 munch==4.0.0
14:25:27 netaddr==1.3.0
14:25:27 netifaces==0.11.0
14:25:27 niet==1.4.2
14:25:27 nodeenv==1.9.1
14:25:27 oauth2client==4.1.3
14:25:27 oauthlib==3.2.2
14:25:27 openstacksdk==3.2.0
14:25:27 os-client-config==2.1.0
14:25:27 os-service-types==1.7.0
14:25:27 osc-lib==3.0.1
14:25:27 oslo.config==9.4.0
14:25:27 oslo.context==5.5.0
14:25:27 oslo.i18n==6.3.0
14:25:27 oslo.log==6.0.0
14:25:27 oslo.serialization==5.4.0
14:25:27 oslo.utils==7.1.0
14:25:27 packaging==24.1
14:25:27 pbr==6.0.0
14:25:27 platformdirs==4.2.2
14:25:27 prettytable==3.10.0
14:25:27 pyasn1==0.6.0
14:25:27 pyasn1_modules==0.4.0
14:25:27 pycparser==2.22
14:25:27 pygerrit2==2.0.15
14:25:27 PyGithub==2.3.0
14:25:27 PyJWT==2.8.0
14:25:27 PyNaCl==1.5.0
14:25:27 pyparsing==2.4.7
14:25:27 pyperclip==1.9.0
14:25:27 pyrsistent==0.20.0
14:25:27 python-cinderclient==9.5.0
14:25:27 python-dateutil==2.9.0.post0
14:25:27 python-heatclient==3.5.0
14:25:27 python-jenkins==1.8.2
14:25:27 python-keystoneclient==5.4.0
14:25:27 python-magnumclient==4.5.0
14:25:27 python-novaclient==18.6.0
14:25:27 python-openstackclient==6.6.0
14:25:27 python-swiftclient==4.6.0
14:25:27 PyYAML==6.0.1
14:25:27 referencing==0.35.1
14:25:27 requests==2.32.3
14:25:27 requests-oauthlib==2.0.0
14:25:27 requestsexceptions==1.4.0
14:25:27 rfc3986==2.0.0
14:25:27 rpds-py==0.18.1
14:25:27 rsa==4.9
14:25:27 ruamel.yaml==0.18.6
14:25:27 ruamel.yaml.clib==0.2.8
14:25:27 s3transfer==0.10.2
14:25:27 simplejson==3.19.2
14:25:27 six==1.16.0
14:25:27 smmap==5.0.1
14:25:27 soupsieve==2.5
14:25:27 stevedore==5.2.0
14:25:27 tabulate==0.9.0
14:25:27 toml==0.10.2
14:25:27 tomlkit==0.12.5
14:25:27 tqdm==4.66.4
14:25:27 typing_extensions==4.12.2
14:25:27 tzdata==2024.1
14:25:27 urllib3==1.26.19
14:25:27 virtualenv==20.26.3
14:25:27 wcwidth==0.2.13
14:25:27 websocket-client==1.8.0
14:25:27 wrapt==1.16.0
14:25:27 xdg==6.0.0
14:25:27 xmltodict==0.13.0
14:25:27 yq==3.4.3
14:25:27 [EnvInject] - Injecting environment variables from a build step.
14:25:27 [EnvInject] - Injecting as environment variables the properties content
14:25:27 SET_JDK_VERSION=openjdk17
14:25:27 GIT_URL="git://cloud.onap.org/mirror"
14:25:27 
14:25:27 [EnvInject] - Variables injected successfully.
14:25:27 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/sh /tmp/jenkins10764446976498124277.sh
14:25:27 ---> update-java-alternatives.sh
14:25:27 ---> Updating Java version
14:25:27 ---> Ubuntu/Debian system detected
14:25:27 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
14:25:27 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
14:25:27 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
14:25:28 openjdk version "17.0.4" 2022-07-19
14:25:28 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
14:25:28 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
14:25:28 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
14:25:28 [EnvInject] - Injecting environment variables from a build step.
14:25:28 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
14:25:28 [EnvInject] - Variables injected successfully.
14:25:28 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/sh -xe /tmp/jenkins3535992191915203227.sh
14:25:28 + /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/csit/run-project-csit.sh apex-pdp
14:25:28 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
14:25:28 WARNING!
Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
14:25:28 Configure a credential helper to remove this warning. See
14:25:28 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
14:25:28 
14:25:28 Login Succeeded
14:25:28 docker: 'compose' is not a docker command.
14:25:28 See 'docker --help'
14:25:28 Docker Compose Plugin not installed. Installing now...
14:25:28   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
14:25:28                                  Dload  Upload   Total   Spent    Left  Speed
14:25:29 100 60.0M  100 60.0M    0     0   149M      0 --:--:-- --:--:-- --:--:--  149M
14:25:29 Setting project configuration for: apex-pdp
14:25:29 Configuring docker compose...
14:25:30 Starting apex-pdp application with Grafana
14:25:30 time="2024-07-03T14:25:30Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string."
14:25:31 kafka Pulling
14:25:31 prometheus Pulling
14:25:31 mariadb Pulling
14:25:31 policy-db-migrator Pulling
14:25:31 pap Pulling
14:25:31 zookeeper Pulling
14:25:31 api Pulling
14:25:31 grafana Pulling
14:25:31 apex-pdp Pulling
14:25:31 simulator Pulling
[per-layer "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Extracting" progress-bar output condensed: between 14:25:31 and 14:25:42 the layers 31e352740f53, 215302b53935, eb2f448c7730, c8ee90c58894, 257d54e26411, 57703e441b07, e30cdb86c4f0, c990b7e46fc8, and 7138254c3790 reach "Pull complete"; the remaining layers are still downloading or extracting when this excerpt ends]
14:25:42 pap Pulled
1.077kB/1.077kB 14:25:42 e8bf24a82546 Downloading [=> ] 4.324MB/180.3MB 14:25:42 21c7cf7066d0 Extracting [==> ] 3.899MB/73.93MB 14:25:42 21c7cf7066d0 Extracting [==> ] 3.899MB/73.93MB 14:25:42 5687ac571232 Verifying Checksum 14:25:42 5687ac571232 Download complete 14:25:42 154b803e2d93 Downloading [=> ] 3.002kB/84.13kB 14:25:42 154b803e2d93 Download complete 14:25:42 4faab25371b2 Downloading [=============================> ] 92.99MB/158.6MB 14:25:42 e4305231c991 Downloading [==================================================>] 92B/92B 14:25:42 e4305231c991 Download complete 14:25:42 e8bf24a82546 Downloading [===> ] 11.35MB/180.3MB 14:25:42 21c7cf7066d0 Extracting [======> ] 8.913MB/73.93MB 14:25:42 21c7cf7066d0 Extracting [======> ] 8.913MB/73.93MB 14:25:42 78f39bed0e83 Pull complete 14:25:42 40796999d308 Extracting [==================================================>] 5.325kB/5.325kB 14:25:42 40796999d308 Extracting [==================================================>] 5.325kB/5.325kB 14:25:42 f469048fbe8d Downloading [==================================================>] 92B/92B 14:25:42 f469048fbe8d Verifying Checksum 14:25:42 f469048fbe8d Download complete 14:25:42 4faab25371b2 Downloading [===============================> ] 100.6MB/158.6MB 14:25:42 c189e028fabb Downloading [==================================================>] 300B/300B 14:25:42 c189e028fabb Verifying Checksum 14:25:42 c189e028fabb Download complete 14:25:42 e8bf24a82546 Downloading [====> ] 17.3MB/180.3MB 14:25:42 21c7cf7066d0 Extracting [=========> ] 13.37MB/73.93MB 14:25:42 21c7cf7066d0 Extracting [=========> ] 13.37MB/73.93MB 14:25:42 c9bd119720e4 Downloading [> ] 539.6kB/246.3MB 14:25:43 40796999d308 Pull complete 14:25:43 14ddc757aae0 Extracting [==================================================>] 5.314kB/5.314kB 14:25:43 14ddc757aae0 Extracting [==================================================>] 5.314kB/5.314kB 14:25:43 4faab25371b2 Downloading [===================================> ] 
113MB/158.6MB 14:25:43 21c7cf7066d0 Extracting [============> ] 18.38MB/73.93MB 14:25:43 21c7cf7066d0 Extracting [============> ] 18.38MB/73.93MB 14:25:43 e8bf24a82546 Downloading [=======> ] 27.57MB/180.3MB 14:25:43 c9bd119720e4 Downloading [> ] 3.243MB/246.3MB 14:25:43 14ddc757aae0 Pull complete 14:25:43 ebe1cd824584 Extracting [==================================================>] 1.037kB/1.037kB 14:25:43 ebe1cd824584 Extracting [==================================================>] 1.037kB/1.037kB 14:25:43 4faab25371b2 Downloading [=======================================> ] 126.5MB/158.6MB 14:25:43 e8bf24a82546 Downloading [========> ] 31.9MB/180.3MB 14:25:43 21c7cf7066d0 Extracting [================> ] 24.51MB/73.93MB 14:25:43 21c7cf7066d0 Extracting [================> ] 24.51MB/73.93MB 14:25:43 c9bd119720e4 Downloading [=> ] 6.487MB/246.3MB 14:25:43 ebe1cd824584 Pull complete 14:25:43 d2893dc6732f Extracting [==================================================>] 1.038kB/1.038kB 14:25:43 4faab25371b2 Downloading [=============================================> ] 144.9MB/158.6MB 14:25:43 d2893dc6732f Extracting [==================================================>] 1.038kB/1.038kB 14:25:43 21c7cf7066d0 Extracting [====================> ] 30.08MB/73.93MB 14:25:43 21c7cf7066d0 Extracting [====================> ] 30.08MB/73.93MB 14:25:43 e8bf24a82546 Downloading [==========> ] 37.85MB/180.3MB 14:25:43 4faab25371b2 Verifying Checksum 14:25:43 4faab25371b2 Download complete 14:25:43 c9bd119720e4 Downloading [=> ] 9.19MB/246.3MB 14:25:43 9fa9226be034 Downloading [> ] 15.3kB/783kB 14:25:43 d2893dc6732f Pull complete 14:25:43 a23a963fcebe Extracting [==================================================>] 13.9kB/13.9kB 14:25:43 a23a963fcebe Extracting [==================================================>] 13.9kB/13.9kB 14:25:43 9fa9226be034 Downloading [==================================================>] 783kB/783kB 14:25:43 9fa9226be034 Download complete 14:25:43 9fa9226be034 
Extracting [==> ] 32.77kB/783kB 14:25:43 21c7cf7066d0 Extracting [========================> ] 35.65MB/73.93MB 14:25:43 21c7cf7066d0 Extracting [========================> ] 35.65MB/73.93MB 14:25:43 e8bf24a82546 Downloading [============> ] 43.79MB/180.3MB 14:25:43 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 14:25:43 1617e25568b2 Downloading [==================================================>] 480.9kB/480.9kB 14:25:43 c9bd119720e4 Downloading [==> ] 14.06MB/246.3MB 14:25:43 3ecda1bfd07b Downloading [> ] 539.6kB/55.21MB 14:25:43 9fa9226be034 Extracting [==================================================>] 783kB/783kB 14:25:43 9fa9226be034 Extracting [==================================================>] 783kB/783kB 14:25:43 e8bf24a82546 Downloading [=============> ] 49.2MB/180.3MB 14:25:43 21c7cf7066d0 Extracting [==========================> ] 38.99MB/73.93MB 14:25:43 21c7cf7066d0 Extracting [==========================> ] 38.99MB/73.93MB 14:25:43 a23a963fcebe Pull complete 14:25:43 369dfa39565e Extracting [==================================================>] 13.79kB/13.79kB 14:25:43 369dfa39565e Extracting [==================================================>] 13.79kB/13.79kB 14:25:44 9fa9226be034 Pull complete 14:25:44 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 14:25:44 21c7cf7066d0 Extracting [===========================> ] 41.22MB/73.93MB 14:25:44 21c7cf7066d0 Extracting [===========================> ] 41.22MB/73.93MB 14:25:44 369dfa39565e Pull complete 14:25:44 9146eb587aa8 Extracting [==================================================>] 2.856kB/2.856kB 14:25:44 9146eb587aa8 Extracting [==================================================>] 2.856kB/2.856kB 14:25:44 21c7cf7066d0 Extracting [============================> ] 42.34MB/73.93MB 14:25:44 21c7cf7066d0 Extracting [============================> ] 42.34MB/73.93MB 14:25:44 e8bf24a82546 Downloading [==============> ] 50.82MB/180.3MB 14:25:44 3ecda1bfd07b Downloading [==> ] 2.702MB/55.21MB 14:25:44 
c9bd119720e4 Downloading [===> ] 18.38MB/246.3MB 14:25:44 1617e25568b2 Extracting [==================================> ] 327.7kB/480.9kB 14:25:44 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 14:25:44 9146eb587aa8 Pull complete 14:25:44 a120f6888c1f Extracting [==================================================>] 2.864kB/2.864kB 14:25:44 a120f6888c1f Extracting [==================================================>] 2.864kB/2.864kB 14:25:44 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 14:25:44 e8bf24a82546 Downloading [=================> ] 63.26MB/180.3MB 14:25:44 c9bd119720e4 Downloading [=====> ] 24.87MB/246.3MB 14:25:44 3ecda1bfd07b Downloading [======> ] 7.028MB/55.21MB 14:25:44 21c7cf7066d0 Extracting [==============================> ] 45.12MB/73.93MB 14:25:44 21c7cf7066d0 Extracting [==============================> ] 45.12MB/73.93MB 14:25:44 e8bf24a82546 Downloading [=====================> ] 77.86MB/180.3MB 14:25:44 3ecda1bfd07b Downloading [===========> ] 12.98MB/55.21MB 14:25:44 c9bd119720e4 Downloading [======> ] 30.82MB/246.3MB 14:25:44 21c7cf7066d0 Extracting [================================> ] 48.46MB/73.93MB 14:25:44 21c7cf7066d0 Extracting [================================> ] 48.46MB/73.93MB 14:25:44 1617e25568b2 Pull complete 14:25:44 a120f6888c1f Pull complete 14:25:44 policy-db-migrator Pulled 14:25:46 21c7cf7066d0 Extracting [====================================> ] 54.59MB/73.93MB 14:25:46 21c7cf7066d0 Extracting [====================================> ] 54.59MB/73.93MB 14:25:46 21c7cf7066d0 Extracting [=========================================> ] 61.28MB/73.93MB 14:25:46 21c7cf7066d0 Extracting [=========================================> ] 61.28MB/73.93MB 14:25:46 21c7cf7066d0 Extracting [===============================================> ] 69.63MB/73.93MB 14:25:46 21c7cf7066d0 Extracting [===============================================> ] 69.63MB/73.93MB 
14:25:46 21c7cf7066d0 Extracting [==================================================>] 73.93MB/73.93MB 14:25:46 21c7cf7066d0 Extracting [==================================================>] 73.93MB/73.93MB 14:25:46 21c7cf7066d0 Pull complete 14:25:46 21c7cf7066d0 Pull complete 14:25:46 c3cc5e3d19ac Extracting [==================================================>] 296B/296B 14:25:46 eb5e31f0ecf8 Extracting [==================================================>] 305B/305B 14:25:46 c3cc5e3d19ac Extracting [==================================================>] 296B/296B 14:25:46 eb5e31f0ecf8 Extracting [==================================================>] 305B/305B 14:25:46 c3cc5e3d19ac Pull complete 14:25:46 eb5e31f0ecf8 Pull complete 14:25:46 0d2280d71230 Extracting [============> ] 32.77kB/127.4kB 14:25:46 0d2280d71230 Extracting [==================================================>] 127.4kB/127.4kB 14:25:46 4faab25371b2 Extracting [> ] 557.1kB/158.6MB 14:25:46 0d2280d71230 Pull complete 14:25:46 984932e12fb0 Extracting [==================================================>] 1.147kB/1.147kB 14:25:46 984932e12fb0 Extracting [==================================================>] 1.147kB/1.147kB 14:25:46 4faab25371b2 Extracting [=====> ] 16.71MB/158.6MB 14:25:46 984932e12fb0 Pull complete 14:25:46 5687ac571232 Extracting [> ] 557.1kB/91.54MB 14:25:46 4faab25371b2 Extracting [=========> ] 28.97MB/158.6MB 14:25:46 5687ac571232 Extracting [=====> ] 10.58MB/91.54MB 14:25:46 4faab25371b2 Extracting [==============> ] 45.12MB/158.6MB 14:25:46 5687ac571232 Extracting [=============> ] 24.51MB/91.54MB 14:25:46 4faab25371b2 Extracting [=================> ] 55.15MB/158.6MB 14:25:46 5687ac571232 Extracting [======================> ] 40.67MB/91.54MB 14:25:46 4faab25371b2 Extracting [=====================> ] 69.63MB/158.6MB 14:25:46 5687ac571232 Extracting [============================> ] 52.92MB/91.54MB 14:25:46 4faab25371b2 Extracting [=========================> ] 80.22MB/158.6MB 
14:25:46 5687ac571232 Extracting [======================================> ] 70.19MB/91.54MB 14:25:46 4faab25371b2 Extracting [=============================> ] 93.59MB/158.6MB 14:25:46 5687ac571232 Extracting [================================================> ] 88.57MB/91.54MB 14:25:46 5687ac571232 Extracting [==================================================>] 91.54MB/91.54MB 14:25:46 4faab25371b2 Extracting [=================================> ] 107MB/158.6MB 14:25:46 5687ac571232 Pull complete 14:25:46 deac262509a5 Extracting [==================================================>] 1.118kB/1.118kB 14:25:46 deac262509a5 Extracting [==================================================>] 1.118kB/1.118kB 14:25:46 4faab25371b2 Extracting [======================================> ] 121.4MB/158.6MB 14:25:46 4faab25371b2 Extracting [=======================================> ] 124.8MB/158.6MB 14:25:46 deac262509a5 Pull complete 14:25:46 api Pulled 14:25:46 4faab25371b2 Extracting [===========================================> ] 136.5MB/158.6MB 14:25:46 4faab25371b2 Extracting [===============================================> ] 152.1MB/158.6MB 14:25:46 4faab25371b2 Extracting [==================================================>] 158.6MB/158.6MB 14:25:46 4faab25371b2 Pull complete 14:25:46 6b867d96d427 Extracting [==================================================>] 1.153kB/1.153kB 14:25:46 6b867d96d427 Extracting [==================================================>] 1.153kB/1.153kB 14:25:46 6b867d96d427 Pull complete 14:25:46 93832cc54357 Extracting [==================================================>] 1.127kB/1.127kB 14:25:46 93832cc54357 Extracting [==================================================>] 1.127kB/1.127kB 14:25:46 e8bf24a82546 Downloading [=========================> ] 90.83MB/180.3MB 14:25:46 c9bd119720e4 Downloading [=======> ] 36.76MB/246.3MB 14:25:46 3ecda1bfd07b Downloading [=================> ] 18.92MB/55.21MB 14:25:46 93832cc54357 Pull complete 14:25:46 
e8bf24a82546 Downloading [===========================> ] 100MB/180.3MB 14:25:46 c9bd119720e4 Downloading [========> ] 44.33MB/246.3MB 14:25:46 3ecda1bfd07b Downloading [========================> ] 27.03MB/55.21MB 14:25:48 simulator Pulled 14:25:48 e8bf24a82546 Downloading [==============================> ] 109.2MB/180.3MB 14:25:48 c9bd119720e4 Downloading [=========> ] 49.2MB/246.3MB 14:25:48 3ecda1bfd07b Downloading [===========================> ] 30.28MB/55.21MB 14:25:49 e8bf24a82546 Downloading [=================================> ] 122.2MB/180.3MB 14:25:49 c9bd119720e4 Downloading [===========> ] 58.93MB/246.3MB 14:25:49 3ecda1bfd07b Downloading [====================================> ] 40.01MB/55.21MB 14:25:49 e8bf24a82546 Downloading [======================================> ] 137.9MB/180.3MB 14:25:49 c9bd119720e4 Downloading [==============> ] 70.29MB/246.3MB 14:25:49 3ecda1bfd07b Downloading [==============================================> ] 50.82MB/55.21MB 14:25:49 3ecda1bfd07b Verifying Checksum 14:25:49 3ecda1bfd07b Download complete 14:25:49 ac9f4de4b762 Downloading [> ] 506.8kB/50.13MB 14:25:49 e8bf24a82546 Downloading [==========================================> ] 153.5MB/180.3MB 14:25:49 c9bd119720e4 Downloading [================> ] 82.72MB/246.3MB 14:25:49 3ecda1bfd07b Extracting [> ] 557.1kB/55.21MB 14:25:49 ac9f4de4b762 Downloading [======> ] 6.094MB/50.13MB 14:25:49 e8bf24a82546 Downloading [===============================================> ] 170.9MB/180.3MB 14:25:49 c9bd119720e4 Downloading [===================> ] 95.7MB/246.3MB 14:25:49 3ecda1bfd07b Extracting [=====> ] 5.571MB/55.21MB 14:25:49 e8bf24a82546 Verifying Checksum 14:25:49 e8bf24a82546 Download complete 14:25:49 ac9f4de4b762 Downloading [==========> ] 10.66MB/50.13MB 14:25:49 ea63b2e6315f Downloading [==================================================>] 605B/605B 14:25:49 ea63b2e6315f Verifying Checksum 14:25:49 ea63b2e6315f Download complete 14:25:49 fbd390d3bd00 Downloading 
[==================================================>] 2.675kB/2.675kB 14:25:49 fbd390d3bd00 Download complete 14:25:49 c9bd119720e4 Downloading [=====================> ] 106.5MB/246.3MB 14:25:49 9b1ac15ef728 Downloading [================================================> ] 3.011kB/3.087kB 14:25:49 9b1ac15ef728 Downloading [==================================================>] 3.087kB/3.087kB 14:25:49 9b1ac15ef728 Verifying Checksum 14:25:49 9b1ac15ef728 Download complete 14:25:49 3ecda1bfd07b Extracting [========> ] 9.47MB/55.21MB 14:25:49 8682f304eb80 Downloading [=====================================> ] 3.011kB/4.023kB 14:25:49 8682f304eb80 Downloading [==================================================>] 4.023kB/4.023kB 14:25:49 8682f304eb80 Verifying Checksum 14:25:49 8682f304eb80 Download complete 14:25:49 5fbafe078afc Downloading [==================================================>] 1.44kB/1.44kB 14:25:49 5fbafe078afc Verifying Checksum 14:25:49 5fbafe078afc Download complete 14:25:49 e8bf24a82546 Extracting [> ] 557.1kB/180.3MB 14:25:49 ac9f4de4b762 Downloading [===================> ] 19.3MB/50.13MB 14:25:49 c9bd119720e4 Downloading [=========================> ] 124.4MB/246.3MB 14:25:49 7fb53fd2ae10 Downloading [=> ] 3.009kB/138kB 14:25:49 7fb53fd2ae10 Downloading [==================================================>] 138kB/138kB 14:25:49 7fb53fd2ae10 Download complete 14:25:49 592798bd3683 Downloading [==================================================>] 100B/100B 14:25:49 592798bd3683 Verifying Checksum 14:25:49 592798bd3683 Download complete 14:25:49 3ecda1bfd07b Extracting [=============> ] 14.48MB/55.21MB 14:25:49 473fdc983780 Downloading [==================================================>] 721B/721B 14:25:49 473fdc983780 Verifying Checksum 14:25:49 473fdc983780 Download complete 14:25:49 10ac4908093d Downloading [> ] 310.2kB/30.43MB 14:25:49 e8bf24a82546 Extracting [=> ] 4.456MB/180.3MB 14:25:49 ac9f4de4b762 Downloading [============================> ] 
28.95MB/50.13MB 14:25:49 c9bd119720e4 Downloading [============================> ] 139MB/246.3MB 14:25:49 3ecda1bfd07b Extracting [==================> ] 20.61MB/55.21MB 14:25:49 10ac4908093d Downloading [=======> ] 4.357MB/30.43MB 14:25:49 e8bf24a82546 Extracting [====> ] 14.48MB/180.3MB 14:25:49 ac9f4de4b762 Downloading [===================================> ] 36.06MB/50.13MB 14:25:49 c9bd119720e4 Downloading [===============================> ] 154.1MB/246.3MB 14:25:49 3ecda1bfd07b Extracting [======================> ] 25.07MB/55.21MB 14:25:49 10ac4908093d Downloading [================> ] 9.96MB/30.43MB 14:25:49 e8bf24a82546 Extracting [========> ] 28.97MB/180.3MB 14:25:49 ac9f4de4b762 Downloading [============================================> ] 44.19MB/50.13MB 14:25:49 c9bd119720e4 Downloading [==================================> ] 171.4MB/246.3MB 14:25:49 ac9f4de4b762 Verifying Checksum 14:25:49 ac9f4de4b762 Download complete 14:25:49 3ecda1bfd07b Extracting [=============================> ] 32.31MB/55.21MB 14:25:49 10ac4908093d Downloading [===========================> ] 16.81MB/30.43MB 14:25:49 e8bf24a82546 Extracting [===========> ] 42.89MB/180.3MB 14:25:49 44779101e748 Downloading [==================================================>] 1.744kB/1.744kB 14:25:49 44779101e748 Verifying Checksum 14:25:49 44779101e748 Download complete 14:25:49 c9bd119720e4 Downloading [======================================> ] 189.2MB/246.3MB 14:25:49 a721db3e3f3d Downloading [> ] 64.45kB/5.526MB 14:25:50 3ecda1bfd07b Extracting [========================================> ] 44.56MB/55.21MB 14:25:50 e8bf24a82546 Extracting [===============> ] 57.38MB/180.3MB 14:25:50 10ac4908093d Downloading [=======================================> ] 24.28MB/30.43MB 14:25:50 c9bd119720e4 Downloading [=========================================> ] 202.8MB/246.3MB 14:25:50 a721db3e3f3d Downloading [=================================> ] 3.734MB/5.526MB 14:25:50 a721db3e3f3d Verifying Checksum 14:25:50 
a721db3e3f3d Download complete 14:25:50 10ac4908093d Download complete 14:25:50 3ecda1bfd07b Extracting [=================================================> ] 55.15MB/55.21MB 14:25:50 e8bf24a82546 Extracting [==================> ] 67.4MB/180.3MB 14:25:50 397a918c7da3 Download complete 14:25:50 3ecda1bfd07b Extracting [==================================================>] 55.21MB/55.21MB 14:25:50 1850a929b84a Downloading [==================================================>] 149B/149B 14:25:50 1850a929b84a Verifying Checksum 14:25:50 1850a929b84a Download complete 14:25:50 c9bd119720e4 Downloading [============================================> ] 219.5MB/246.3MB 14:25:50 634de6c90876 Downloading [===========================================> ] 3.011kB/3.49kB 14:25:50 634de6c90876 Download complete 14:25:50 3ecda1bfd07b Pull complete 14:25:50 e8bf24a82546 Extracting [=====================> ] 75.76MB/180.3MB 14:25:50 cd00854cfb1a Downloading [=====================> ] 3.011kB/6.971kB 14:25:50 cd00854cfb1a Downloading [==================================================>] 6.971kB/6.971kB 14:25:50 cd00854cfb1a Verifying Checksum 14:25:50 cd00854cfb1a Download complete 14:25:50 c9bd119720e4 Downloading [===============================================> ] 233MB/246.3MB 14:25:50 10ac4908093d Extracting [> ] 327.7kB/30.43MB 14:25:50 806be17e856d Downloading [> ] 539.6kB/89.72MB 14:25:50 ac9f4de4b762 Extracting [> ] 524.3kB/50.13MB 14:25:50 4abcf2066143 Downloading [> ] 48.06kB/3.409MB 14:25:50 c9bd119720e4 Verifying Checksum 14:25:50 c9bd119720e4 Download complete 14:25:50 e8bf24a82546 Extracting [=======================> ] 84.67MB/180.3MB 14:25:50 10ac4908093d Extracting [=====> ] 3.604MB/30.43MB 14:25:50 c0e05c86127e Downloading [==================================================>] 141B/141B 14:25:50 c0e05c86127e Verifying Checksum 14:25:50 c0e05c86127e Download complete 14:25:50 ac9f4de4b762 Extracting [====> ] 4.719MB/50.13MB 14:25:50 4abcf2066143 Downloading 
[=======================> ] 1.621MB/3.409MB 14:25:50 806be17e856d Downloading [=> ] 2.162MB/89.72MB 14:25:50 706651a94df6 Downloading [> ] 31.68kB/3.162MB 14:25:50 4abcf2066143 Verifying Checksum 14:25:50 4abcf2066143 Download complete 14:25:50 4abcf2066143 Extracting [> ] 65.54kB/3.409MB 14:25:50 e8bf24a82546 Extracting [=========================> ] 90.24MB/180.3MB 14:25:50 33e0a01314cc Downloading [> ] 48.06kB/4.333MB 14:25:50 10ac4908093d Extracting [============> ] 7.537MB/30.43MB 14:25:50 ac9f4de4b762 Extracting [=======> ] 7.864MB/50.13MB 14:25:50 806be17e856d Downloading [===> ] 6.487MB/89.72MB 14:25:50 706651a94df6 Downloading [=============================> ] 1.867MB/3.162MB 14:25:50 4abcf2066143 Extracting [=====> ] 393.2kB/3.409MB 14:25:50 706651a94df6 Verifying Checksum 14:25:50 706651a94df6 Download complete 14:25:50 e8bf24a82546 Extracting [==========================> ] 94.14MB/180.3MB 14:25:50 f8b444c6ff40 Downloading [===> ] 3.01kB/47.97kB 14:25:50 f8b444c6ff40 Downloading [==================================================>] 47.97kB/47.97kB 14:25:50 f8b444c6ff40 Verifying Checksum 14:25:50 f8b444c6ff40 Download complete 14:25:50 33e0a01314cc Downloading [==========================> ] 2.26MB/4.333MB 14:25:50 10ac4908093d Extracting [================> ] 9.83MB/30.43MB 14:25:50 e6c38e6d3add Downloading [======> ] 3.01kB/23.82kB 14:25:50 e6c38e6d3add Downloading [==================================================>] 23.82kB/23.82kB 14:25:50 e6c38e6d3add Verifying Checksum 14:25:50 e6c38e6d3add Download complete 14:25:50 806be17e856d Downloading [=======> ] 13.52MB/89.72MB 14:25:50 ac9f4de4b762 Extracting [==========> ] 11.01MB/50.13MB 14:25:50 4abcf2066143 Extracting [==================================================>] 3.409MB/3.409MB 14:25:50 6ca01427385e Downloading [> ] 539.6kB/61.48MB 14:25:50 33e0a01314cc Verifying Checksum 14:25:50 33e0a01314cc Download complete 14:25:50 e8bf24a82546 Extracting [===========================> ] 98.04MB/180.3MB 
14:25:50 10ac4908093d Extracting [======================> ] 13.76MB/30.43MB 14:25:50 4abcf2066143 Pull complete 14:25:50 806be17e856d Downloading [============> ] 22.17MB/89.72MB 14:25:50 c0e05c86127e Extracting [==================================================>] 141B/141B 14:25:50 c0e05c86127e Extracting [==================================================>] 141B/141B 14:25:50 ac9f4de4b762 Extracting [==============> ] 14.68MB/50.13MB 14:25:50 6ca01427385e Downloading [===> ] 3.784MB/61.48MB 14:25:50 e35e8e85e24d Downloading [> ] 506.8kB/50.55MB 14:25:50 e8bf24a82546 Extracting [============================> ] 101.9MB/180.3MB 14:25:50 10ac4908093d Extracting [===============================> ] 19.01MB/30.43MB 14:25:50 806be17e856d Downloading [================> ] 30.28MB/89.72MB 14:25:50 ac9f4de4b762 Extracting [=================> ] 17.3MB/50.13MB 14:25:50 6ca01427385e Downloading [=======> ] 9.731MB/61.48MB 14:25:50 e35e8e85e24d Downloading [==> ] 2.031MB/50.55MB 14:25:50 e8bf24a82546 Extracting [=============================> ] 108.1MB/180.3MB 14:25:50 10ac4908093d Extracting [======================================> ] 23.59MB/30.43MB 14:25:50 c0e05c86127e Pull complete 14:25:50 706651a94df6 Extracting [> ] 32.77kB/3.162MB 14:25:50 806be17e856d Downloading [====================> ] 37.31MB/89.72MB 14:25:50 ac9f4de4b762 Extracting [===================> ] 19.4MB/50.13MB 14:25:50 6ca01427385e Downloading [==========> ] 13.52MB/61.48MB 14:25:50 e35e8e85e24d Downloading [=====> ] 5.586MB/50.55MB 14:25:50 e8bf24a82546 Extracting [==============================> ] 111.4MB/180.3MB 14:25:50 10ac4908093d Extracting [===========================================> ] 26.21MB/30.43MB 14:25:51 706651a94df6 Extracting [=======> ] 458.8kB/3.162MB 14:25:51 806be17e856d Downloading [=============================> ] 52.98MB/89.72MB 14:25:51 ac9f4de4b762 Extracting [=======================> ] 23.07MB/50.13MB 14:25:51 6ca01427385e Downloading [================> ] 20MB/61.48MB 14:25:51 
e35e8e85e24d Downloading [===============> ] 15.24MB/50.55MB 14:25:51 10ac4908093d Extracting [==============================================> ] 28.18MB/30.43MB 14:25:51 706651a94df6 Extracting [===============================================> ] 2.982MB/3.162MB 14:25:51 806be17e856d Downloading [================================> ] 57.85MB/89.72MB 14:25:51 e8bf24a82546 Extracting [===============================> ] 114.8MB/180.3MB 14:25:51 ac9f4de4b762 Extracting [===========================> ] 27.26MB/50.13MB 14:25:51 6ca01427385e Downloading [====================> ] 25.41MB/61.48MB 14:25:51 706651a94df6 Extracting [==================================================>] 3.162MB/3.162MB 14:25:51 e35e8e85e24d Downloading [======================> ] 22.85MB/50.55MB 14:25:51 10ac4908093d Extracting [================================================> ] 29.49MB/30.43MB 14:25:51 806be17e856d Downloading [====================================> ] 65.42MB/89.72MB 14:25:51 e8bf24a82546 Extracting [================================> ] 118.1MB/180.3MB 14:25:51 ac9f4de4b762 Extracting [======================================> ] 38.27MB/50.13MB 14:25:51 6ca01427385e Downloading [=========================> ] 31.9MB/61.48MB 14:25:51 e8bf24a82546 Extracting [=================================> ] 122.6MB/180.3MB 14:25:51 e35e8e85e24d Downloading [===============================> ] 31.49MB/50.55MB 14:25:51 806be17e856d Downloading [======================================> ] 68.66MB/89.72MB 14:25:51 706651a94df6 Pull complete 14:25:51 33e0a01314cc Extracting [> ] 65.54kB/4.333MB 14:25:51 10ac4908093d Extracting [==================================================>] 30.43MB/30.43MB 14:25:51 6ca01427385e Downloading [==============================> ] 37.85MB/61.48MB 14:25:51 ac9f4de4b762 Extracting [=================================================> ] 49.28MB/50.13MB 14:25:51 e8bf24a82546 Extracting [==================================> ] 125.9MB/180.3MB 14:25:51 806be17e856d Downloading 
14:25:51 [docker image layer download/extract progress output trimmed]
14:25:52 prometheus Pulled
14:26:07 mariadb Pulled
14:26:07 apex-pdp Pulled
14:26:07 grafana Pulled
[==================================================>] 1.101kB/1.101kB 14:26:13 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 14:26:19 91ef9543149d Pull complete 14:26:19 91ef9543149d Pull complete 14:26:22 2ec4f59af178 Extracting [==================================================>] 881B/881B 14:26:22 2ec4f59af178 Extracting [==================================================>] 881B/881B 14:26:22 2ec4f59af178 Extracting [==================================================>] 881B/881B 14:26:22 2ec4f59af178 Extracting [==================================================>] 881B/881B 14:26:23 2ec4f59af178 Pull complete 14:26:23 2ec4f59af178 Pull complete 14:26:23 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 14:26:23 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 14:26:23 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 14:26:23 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 14:26:23 8b7e81cd5ef1 Pull complete 14:26:23 8b7e81cd5ef1 Pull complete 14:26:23 c52916c1316e Extracting [==================================================>] 171B/171B 14:26:23 c52916c1316e Extracting [==================================================>] 171B/171B 14:26:23 c52916c1316e Extracting [==================================================>] 171B/171B 14:26:23 c52916c1316e Extracting [==================================================>] 171B/171B 14:26:23 c52916c1316e Pull complete 14:26:23 c52916c1316e Pull complete 14:26:23 d93f69e96600 Extracting [> ] 557.1kB/115.2MB 14:26:23 7a1cb9ad7f75 Extracting [> ] 557.1kB/115.2MB 14:26:23 d93f69e96600 Extracting [=====> ] 13.37MB/115.2MB 14:26:23 7a1cb9ad7f75 Extracting [=====> ] 12.81MB/115.2MB 14:26:23 d93f69e96600 Extracting [===========> ] 26.74MB/115.2MB 14:26:23 7a1cb9ad7f75 Extracting [==========> ] 24.51MB/115.2MB 14:26:24 
d93f69e96600 Extracting [==================> ] 43.45MB/115.2MB 14:26:24 7a1cb9ad7f75 Extracting [================> ] 37.32MB/115.2MB 14:26:24 d93f69e96600 Extracting [========================> ] 57.38MB/115.2MB 14:26:24 7a1cb9ad7f75 Extracting [======================> ] 50.69MB/115.2MB 14:26:24 d93f69e96600 Extracting [==============================> ] 70.75MB/115.2MB 14:26:24 7a1cb9ad7f75 Extracting [=============================> ] 67.4MB/115.2MB 14:26:24 d93f69e96600 Extracting [======================================> ] 88.57MB/115.2MB 14:26:24 7a1cb9ad7f75 Extracting [====================================> ] 84.12MB/115.2MB 14:26:24 d93f69e96600 Extracting [==========================================> ] 98.6MB/115.2MB 14:26:24 7a1cb9ad7f75 Extracting [===========================================> ] 99.71MB/115.2MB 14:26:24 d93f69e96600 Extracting [================================================> ] 110.9MB/115.2MB 14:26:24 7a1cb9ad7f75 Extracting [================================================> ] 110.9MB/115.2MB 14:26:24 7a1cb9ad7f75 Extracting [==================================================>] 115.2MB/115.2MB 14:26:24 d93f69e96600 Extracting [==================================================>] 115.2MB/115.2MB 14:26:24 7a1cb9ad7f75 Pull complete 14:26:24 d93f69e96600 Pull complete 14:26:24 bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 14:26:24 bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 14:26:24 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 14:26:24 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 14:26:25 bbb9d15c45a1 Pull complete 14:26:25 0a92c7dea7af Pull complete 14:26:25 kafka Pulled 14:26:25 zookeeper Pulled 14:26:25 Network compose_default Creating 14:26:25 Network compose_default Created 14:26:25 Container zookeeper Creating 14:26:25 Container simulator Creating 
14:26:25 Container mariadb Creating
14:26:25 Container prometheus Creating
14:26:46 Container zookeeper Created
14:26:46 Container kafka Creating
14:26:46 Container prometheus Created
14:26:46 Container grafana Creating
14:26:46 Container mariadb Created
14:26:46 Container policy-db-migrator Creating
14:26:46 Container simulator Created
14:26:46 Container policy-db-migrator Created
14:26:46 Container policy-api Creating
14:26:46 Container grafana Created
14:26:46 Container kafka Created
14:26:46 Container policy-api Created
14:26:46 Container policy-pap Creating
14:26:46 Container policy-pap Created
14:26:46 Container policy-apex-pdp Creating
14:26:46 Container policy-apex-pdp Created
14:26:46 Container mariadb Starting
14:26:46 Container simulator Starting
14:26:46 Container prometheus Starting
14:26:46 Container zookeeper Starting
14:26:47 Container mariadb Started
14:26:47 Container policy-db-migrator Starting
14:26:48 Container policy-db-migrator Started
14:26:48 Container policy-api Starting
14:26:48 Container zookeeper Started
14:26:48 Container kafka Starting
14:26:49 Container prometheus Started
14:26:49 Container grafana Starting
14:26:50 Container simulator Started
14:26:50 Container policy-api Started
14:26:52 Container grafana Started
14:26:53 Container kafka Started
14:26:53 Container policy-pap Starting
14:26:53 Container policy-pap Started
14:26:53 Container policy-apex-pdp Starting
14:26:54 Container policy-apex-pdp Started
14:26:54 Prometheus server: http://localhost:30259
14:26:54 Grafana server: http://localhost:30269
14:27:04 Waiting for REST to come up on localhost port 30003...
14:27:04 NAMES             STATUS
14:27:04 policy-apex-pdp   Up 10 seconds
14:27:04 policy-pap        Up 10 seconds
14:27:04 policy-api        Up 13 seconds
14:27:04 kafka             Up 11 seconds
14:27:04 grafana           Up 12 seconds
14:27:04 zookeeper         Up 16 seconds
14:27:04 simulator         Up 14 seconds
14:27:04 mariadb           Up 17 seconds
14:27:04 prometheus        Up 15 seconds
14:27:24 NAMES             STATUS
14:27:24 policy-apex-pdp   Up 30 seconds
14:27:24 policy-pap        Up 31 seconds
14:27:24 policy-api        Up 34 seconds
14:27:24 kafka             Up 31 seconds
14:27:24 grafana           Up 32 seconds
14:27:24 zookeeper         Up 36 seconds
14:27:24 simulator         Up 34 seconds
14:27:24 mariadb           Up 37 seconds
14:27:24 prometheus        Up 35 seconds
14:27:25 Waiting for REST to come up on localhost port 30001...
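The harness above polls container status roughly every five seconds while waiting for the REST endpoints on localhost ports 30003 and 30001 to answer. A minimal sketch of such a readiness loop, assuming a hypothetical `wait_for_port` helper (this is not the script the job actually runs):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 120.0, interval: float = 5.0) -> bool:
    """Poll until a TCP port accepts connections, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection succeeds as soon as the service is listening
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(interval)  # not up yet; retry after the poll interval
    return False
```

The real CI script additionally reprints the `docker ps`-style status table on each poll, which is what produces the repeated NAMES/STATUS snapshots in the log.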
14:27:25 NAMES             STATUS
14:27:25 policy-apex-pdp   Up 30 seconds
14:27:25 policy-pap        Up 31 seconds
14:27:25 policy-api        Up 34 seconds
14:27:25 kafka             Up 31 seconds
14:27:25 grafana           Up 32 seconds
14:27:25 zookeeper         Up 36 seconds
14:27:25 simulator         Up 34 seconds
14:27:25 mariadb           Up 37 seconds
14:27:25 prometheus        Up 35 seconds
14:27:45 Build docker image for robot framework
14:27:45 Error: No such image: policy-csit-robot
14:27:45 Cloning into '/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/csit/resources/tests/models'...
14:27:46 Build robot framework docker image
14:27:46 Sending build context to Docker daemon 16.49MB
14:27:46 Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
14:27:46 3.10-slim-bullseye: Pulling from library/python
14:27:48 76956b537f14: Pull complete
14:27:48 f75f1b8a4051: Pull complete
14:27:48 f9adc358e0b8: Pull complete
14:27:48 f66e101ef41f: Pull complete
14:27:49 b913137adf9e: Pull complete
14:27:49 Digest: sha256:fc8ba6002a477d6536097e9cc529c593cd6621a66c81e601b5353265afd10775
14:27:49 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
14:27:49 ---> 08150e0479fc
14:27:49 Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
14:27:51 ---> Running in 47121f7e5671
14:27:51 Removing intermediate container 47121f7e5671
14:27:51 ---> f0ae5763ce92
14:27:51 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
14:27:51 ---> Running in f1976de4cff1
14:27:51 Removing intermediate container f1976de4cff1
14:27:51 ---> eb7ec891c2c7
14:27:51 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE TEST_ENV=$TEST_ENV
14:27:51 ---> Running in 5ababf4d3806
14:27:51 Removing intermediate container 5ababf4d3806
14:27:51 ---> 9a3cd5e291ff
14:27:51 Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
14:27:51 ---> Running in fb362da14342
14:28:03 bcrypt==4.1.3
14:28:03 certifi==2024.6.2
14:28:03 cffi==1.17.0rc1
14:28:03 charset-normalizer==3.3.2
14:28:03 confluent-kafka==2.4.0
14:28:03 cryptography==42.0.8
14:28:03 decorator==5.1.1
14:28:03 deepdiff==7.0.1
14:28:03 dnspython==2.6.1
14:28:03 future==1.0.0
14:28:03 idna==3.7
14:28:03 Jinja2==3.1.4
14:28:03 jsonpath-rw==1.4.0
14:28:03 kafka-python==2.0.2
14:28:03 MarkupSafe==2.1.5
14:28:03 more-itertools==5.0.0
14:28:03 ordered-set==4.1.0
14:28:03 paramiko==3.4.0
14:28:03 pbr==6.0.0
14:28:03 ply==3.11
14:28:03 protobuf==5.27.2
14:28:03 pycparser==2.22
14:28:03 PyNaCl==1.5.0
14:28:03 PyYAML==6.0.2rc1
14:28:03 requests==2.32.3
14:28:03 robotframework==7.0.1
14:28:03 robotframework-onap==0.6.0.dev105
14:28:03 robotframework-requests==1.0a11
14:28:03 robotlibcore-temp==1.0.2
14:28:03 six==1.16.0
14:28:03 urllib3==2.2.2
14:28:07 Removing intermediate container fb362da14342
14:28:07 ---> 883fdc986b3d
14:28:07 Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
14:28:07 ---> Running in 92491a93ae70
14:28:08 Removing intermediate container 92491a93ae70
14:28:08 ---> eebb060d9e8a
14:28:08 Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
14:28:09 ---> 064bb994d280
14:28:09 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
14:28:10 ---> Running in ab701490590b
14:28:10 Removing intermediate container ab701490590b
14:28:10 ---> e0175ee69bf8
14:28:10 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
14:28:10 ---> Running in bfd25930b280
14:28:10 Removing intermediate container bfd25930b280
14:28:10 ---> ce4c16e8accd
14:28:10 Successfully built ce4c16e8accd
14:28:10 Successfully tagged policy-csit-robot:latest
14:28:13 top - 14:28:13 up 46 min, 0 users, load average: 2.56, 1.55, 0.63
14:28:13 Tasks: 205 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
14:28:13 %Cpu(s): 1.3 us, 0.3 sy, 0.0 ni, 97.6 id, 0.8 wa, 0.0 hi, 0.0 si, 0.0 st
14:28:13
14:28:13        total   used   free   shared   buff/cache   available
14:28:13 Mem:     31G   2.7G    22G     1.3M         6.5G         28G
14:28:13 Swap:   1.0G     0B   1.0G
14:28:13
14:28:13 NAMES             STATUS
14:28:13 policy-apex-pdp   Up About a minute
14:28:13 policy-pap        Up About a minute
14:28:13 policy-api        Up About a minute
14:28:13 kafka             Up About a minute
14:28:13 grafana           Up About a minute
14:28:13 zookeeper         Up About a minute
14:28:13 simulator         Up About a minute
14:28:13 mariadb           Up About a minute
14:28:13 prometheus        Up About a minute
14:28:13
14:28:15 CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
14:28:15 95cd78644f92   policy-apex-pdp   0.75%   182.1MiB / 31.41GiB   0.57%   35kB / 51.5kB     0B / 0B         50
14:28:15 8923e1ceb5da   policy-pap        0.82%   502.3MiB / 31.41GiB   1.56%   123kB / 148kB     0B / 149MB      64
14:28:15 509428bc3020   policy-api        0.12%   473.1MiB / 31.41GiB   1.47%   989kB / 673kB     0B / 0B         54
14:28:15 9be495a3f092   kafka             1.60%   384.2MiB / 31.41GiB   1.19%   157kB / 150kB     0B / 561kB      85
14:28:15 5e03ed1e0f86   grafana           0.06%   65.15MiB / 31.41GiB   0.20%   24.4kB / 4.96kB   0B / 26.5MB     20
14:28:15 774ad6eb722a   zookeeper         0.09%   100.5MiB / 31.41GiB   0.31%   57.3kB / 51.2kB   0B / 410kB      61
14:28:15 2ca4a8edeb7d   simulator         0.06%   119.5MiB / 31.41GiB   0.37%   1.43kB / 0B       225kB / 0B      77
14:28:15 e465b8fab7c1   mariadb           0.02%   102MiB / 31.41GiB     0.32%   969kB / 1.22MB    11MB / 71.9MB   29
14:28:15 ee08e65f2188   prometheus        0.00%   20.35MiB / 31.41GiB   0.06%   67.5kB / 3.19kB   0B / 8.19kB     13
14:28:15
14:28:15 time="2024-07-03T14:28:15Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string."
14:28:16 Container policy-csit Creating
14:28:16 Container policy-csit Created
14:28:16 Attaching to policy-csit
14:28:16 policy-csit | Invoking the robot tests from: apex-pdp-test.robot apex-slas.robot
14:28:16 policy-csit | Run Robot test
14:28:16 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
14:28:16 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
14:28:16 policy-csit | -v POLICY_API_IP:policy-api:6969
14:28:16 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
14:28:16 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
14:28:16 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
14:28:16 policy-csit | -v APEX_IP:policy-apex-pdp:6969
14:28:16 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
14:28:16 policy-csit | -v KAFKA_IP:kafka:9092
14:28:16 policy-csit | -v PROMETHEUS_IP:prometheus:9090
14:28:16 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
14:28:16 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
14:28:16 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
14:28:16 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
14:28:16 policy-csit | -v TEMP_FOLDER:/tmp/distribution
14:28:16 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
14:28:16 policy-csit | -v TEST_ENV:
14:28:16 policy-csit | -v JAEGER_IP:jaeger:16686
14:28:16 policy-csit | Starting Robot test suites ...
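The `ROBOT_VARIABLES` listing above is a flat series of Robot Framework `-v NAME:VALUE` overrides that `run-test.sh` passes to the `robot` command. A sketch of how such an argument list can be assembled; `build_robot_args` and the `overrides` dict are illustrative (a few values copied from the log), not the project's actual script:

```python
def build_robot_args(variables: dict[str, str], suites: list[str]) -> list[str]:
    """Flatten {name: value} overrides into robot's -v NAME:VALUE argument list."""
    args: list[str] = []
    for name, value in variables.items():
        args += ["-v", f"{name}:{value}"]
    return args + suites

# Hypothetical subset of the overrides seen in the log; hostnames resolve on
# the compose network, so e.g. POLICY_API_IP points at the policy-api container.
overrides = {
    "POLICY_API_IP": "policy-api:6969",
    "POLICY_PAP_IP": "policy-pap:6969",
    "APEX_IP": "policy-apex-pdp:6969",
    "PROMETHEUS_IP": "prometheus:9090",
}
cmd = ["robot"] + build_robot_args(overrides, ["apex-pdp-test.robot", "apex-slas.robot"])
```

Because the values carry colons themselves (`host:port`), Robot splits each `-v` option on the first colon only, leaving `policy-api:6969` intact as the variable value.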
14:28:17 policy-csit | ==============================================================================
14:28:17 policy-csit | Apex-Pdp-Test & Apex-Slas
14:28:17 policy-csit | ==============================================================================
14:28:17 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test
14:28:17 policy-csit | ==============================================================================
14:28:17 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
14:28:17 policy-csit | ------------------------------------------------------------------------------
14:28:17 policy-csit | ExecuteApexSampleDomainPolicy | FAIL |
14:28:17 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:28:17 policy-csit | ------------------------------------------------------------------------------
14:28:18 policy-csit | ExecuteApexTestPnfPolicy | FAIL |
14:28:18 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:28:18 policy-csit | ------------------------------------------------------------------------------
14:28:18 policy-csit | ExecuteApexTestPnfPolicyWithMetadataSet | FAIL |
14:28:18 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:28:18 policy-csit | ------------------------------------------------------------------------------
14:28:18 policy-csit | Metrics :: Verify policy-apex-pdp is exporting prometheus metrics | FAIL |
14:28:18 policy-csit | '# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
14:28:18 policy-csit | # TYPE process_cpu_seconds_total counter
14:28:18 policy-csit | process_cpu_seconds_total 8.34
14:28:18 policy-csit | # HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
14:28:18 policy-csit | # TYPE process_start_time_seconds gauge
14:28:18 policy-csit | process_start_time_seconds 1.720016842817E9
14:28:18 policy-csit | # HELP process_open_fds Number of open file descriptors.
14:28:18 policy-csit | # TYPE process_open_fds gauge
14:28:18 policy-csit | process_open_fds 387.0
14:28:18 policy-csit | # HELP process_max_fds Maximum number of open file descriptors.
14:28:18 policy-csit | # TYPE process_max_fds gauge
14:28:18 policy-csit | process_max_fds 1048576.0
14:28:18 policy-csit | # HELP process_virtual_memory_bytes Virtual memory size in bytes.
14:28:18 policy-csit | # TYPE process_virtual_memory_bytes gauge
14:28:18 policy-csit | process_virtual_memory_bytes 1.0461679616E10
14:28:18 policy-csit | # HELP process_resident_memory_bytes Resident memory size in bytes.
14:28:18 policy-csit | # TYPE process_resident_memory_bytes gauge
14:28:18 policy-csit | process_resident_memory_bytes 1.99868416E8
14:28:18 policy-csit | [ Message content over the limit has been removed. ]
14:28:18 policy-csit | # TYPE pdpa_policy_deployments_total counter
14:28:18 policy-csit | # HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
14:28:18 policy-csit | # TYPE jvm_memory_pool_allocated_bytes_created gauge
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.720016844472E9
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Old Gen",} 1.720016844501E9
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Eden Space",} 1.720016844501E9
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.720016844501E9
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Survivor Space",} 1.720016844501E9
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.720016844501E9
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.720016844501E9
14:28:18 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.720016844501E9
14:28:18 policy-csit | ' does not contain 'pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0'
14:28:18 policy-csit | ------------------------------------------------------------------------------
14:28:18 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test | FAIL |
14:28:18 policy-csit | 5 tests, 1 passed, 4 failed
14:28:18 policy-csit | ==============================================================================
14:28:18 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas
14:28:18 policy-csit | ==============================================================================
14:28:18 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
14:28:18 policy-csit | ------------------------------------------------------------------------------
14:28:19 policy-csit | ValidatePolicyExecutionAndEventRateLowComplexity :: Validate that ... | FAIL |
14:28:19 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:28:19 policy-csit | ------------------------------------------------------------------------------
14:28:19 policy-csit | ValidatePolicyExecutionAndEventRateModerateComplexity :: Validate ... | FAIL |
14:28:19 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:28:19 policy-csit | ------------------------------------------------------------------------------
14:28:19 policy-csit | ValidatePolicyExecutionAndEventRateHighComplexity :: Validate that... | FAIL |
14:28:19 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:28:19 policy-csit | ------------------------------------------------------------------------------
14:29:19 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
14:29:19 policy-csit | ------------------------------------------------------------------------------
14:29:19 policy-csit | ValidatePolicyExecutionTimes :: Validate policy execution times us... | FAIL |
14:29:19 policy-csit | Resolving variable '${resp['data']['result'][0]['value'][1]}' failed: IndexError: list index out of range
14:29:19 policy-csit | ------------------------------------------------------------------------------
14:29:19 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas | FAIL |
14:29:19 policy-csit | 6 tests, 2 passed, 4 failed
14:29:19 policy-csit | ==============================================================================
14:29:19 policy-csit | Apex-Pdp-Test & Apex-Slas | FAIL |
14:29:19 policy-csit | 11 tests, 3 passed, 8 failed
14:29:19 policy-csit | ==============================================================================
14:29:19 policy-csit | Output: /tmp/results/output.xml
14:29:19 policy-csit | Log: /tmp/results/log.html
14:29:19 policy-csit | Report: /tmp/results/report.html
14:29:19 policy-csit | RESULT: 8
14:29:20 policy-csit exited with code 8
14:29:20 NAMES             STATUS
14:29:20 policy-apex-pdp   Up 2 minutes
14:29:20 policy-pap        Up 2 minutes
14:29:20 policy-api        Up 2 minutes
14:29:20 kafka             Up 2 minutes
14:29:20 grafana           Up 2 minutes
14:29:20 zookeeper         Up 2 minutes
14:29:20 simulator         Up 2 minutes
14:29:20 mariadb           Up 2 minutes
14:29:20 prometheus        Up 2 minutes
14:29:20 Shut down started!
14:29:22 Collecting logs from docker compose containers...
14:29:22 time="2024-07-03T14:29:22Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string."
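The `RESULT: 8` line and `policy-csit exited with code 8` above follow Robot Framework's return-code convention: the process exit code is the number of failed tests (capped at 250, with 251-255 reserved for framework errors), matching the `11 tests, 3 passed, 8 failed` summary. A small sketch of that documented mapping; `interpret_robot_rc` is a hypothetical helper name:

```python
def interpret_robot_rc(rc: int) -> str:
    """Translate a Robot Framework process return code into a human-readable summary."""
    if rc == 0:
        return "all tests passed"
    if 1 <= rc <= 249:
        return f"{rc} tests failed"          # exit code counts failed tests
    if rc == 250:
        return "250 or more tests failed"
    if rc == 251:
        return "help or version information printed"
    if rc == 252:
        return "invalid test data or command line options"
    if rc == 253:
        return "test execution stopped by user"
    if rc == 255:
        return "unexpected internal error"
    return f"unknown return code {rc}"
```

This is why the CI job can propagate the container's exit status directly as the build verdict: any nonzero value marks the verification as failed.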
14:29:26 ======== Logs from grafana ========
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.217973073Z level=info msg="Starting Grafana" version=11.1.0 commit=5b85c4c2fcf5d32d4f68aaef345c53096359b2f1 branch=HEAD compiled=2024-07-03T14:26:52Z
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218479195Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218493275Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218497595Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218501055Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218503865Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218509805Z level=info msg="Config
overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218518365Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218525376Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218528526Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218531576Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218535116Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218539016Z level=info msg=Target target=[all]
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218555596Z level=info msg="Path Home" path=/usr/share/grafana
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218559476Z level=info msg="Path Data" path=/var/lib/grafana
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218562646Z level=info msg="Path Logs" path=/var/log/grafana
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218565766Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218569346Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
14:29:26 grafana | logger=settings t=2024-07-03T14:26:52.218572767Z level=info msg="App mode production"
14:29:26 grafana | logger=featuremgmt t=2024-07-03T14:26:52.220970487Z level=info msg=FeatureToggles correlations=true managedPluginsInstall=true logRowsPopoverMenu=true prometheusDataplane=true lokiStructuredMetadata=true
recordedQueriesMulti=true transformationsRedesign=true publicDashboards=true alertingSimplifiedRouting=true prometheusMetricEncyclopedia=true betterPageScrolling=true lokiQueryHints=true kubernetesPlaylists=true awsDatasourcesNewFormStyling=true alertingInsights=true annotationPermissionUpdate=true topnav=true lokiQuerySplitting=true lokiMetricDataplane=true logsExploreTableVisualisation=true nestedFolders=true awsAsyncQueryCaching=true ssoSettingsApi=true exploreMetrics=true angularDeprecationUI=true cloudWatchCrossAccountQuerying=true logsInfiniteScrolling=true dataplaneFrontendFallback=true dashgpt=true cloudWatchNewLabelParsing=true alertingNoDataErrorExecution=true prometheusConfigOverhaulAuth=true exploreContentOutline=true recoveryThreshold=true panelMonitoring=true influxdbBackendMigration=true logsContextDatasourceUi=true 14:29:26 grafana | logger=sqlstore t=2024-07-03T14:26:52.221039168Z level=info msg="Connecting to DB" dbtype=sqlite3 14:29:26 grafana | logger=sqlstore t=2024-07-03T14:26:52.221187351Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.22497914Z level=info msg="Locking database" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.224992811Z level=info msg="Starting DB migrations" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.225761407Z level=info msg="Executing migration" id="create migration_log table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.226705747Z level=info msg="Migration successfully executed" id="create migration_log table" duration=944.02µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.245337577Z level=info msg="Executing migration" id="create user table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.246637825Z level=info msg="Migration successfully executed" id="create user table" duration=1.299858ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.257104893Z level=info 
msg="Executing migration" id="add unique index user.login" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.25838226Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.276287ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.267170594Z level=info msg="Executing migration" id="add unique index user.email" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.268058482Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=892.148µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.274141651Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.274783444Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=644.153µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.280073585Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.281032625Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=959.61µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.288693315Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.292159338Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.481313ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.296199772Z level=info msg="Executing migration" id="create user table v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.29701028Z level=info msg="Migration successfully executed" id="create user table v2" duration=811.128µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.306630411Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.307612161Z level=info 
msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=986µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.31993627Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.32092932Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=997.9µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.332547523Z level=info msg="Executing migration" id="copy data_source v1 to v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.333018054Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=474.051µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.339423288Z level=info msg="Executing migration" id="Drop old table user_v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.34000848Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=585.872µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.342969142Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.344024024Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.054622ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.350853268Z level=info msg="Executing migration" id="Update user table charset" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.350894848Z level=info msg="Migration successfully executed" id="Update user table charset" duration=39.671µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.356011276Z level=info msg="Executing migration" id="Add last_seen_at column to user" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.357468566Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.45877ms 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:52.361216045Z level=info msg="Executing migration" id="Add missing user data" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.3614499Z level=info msg="Migration successfully executed" id="Add missing user data" duration=234.435µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.366659219Z level=info msg="Executing migration" id="Add is_disabled column to user" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.367863044Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.204545ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.374288308Z level=info msg="Executing migration" id="Add index user.login/user.email" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.375131877Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=845.499µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.385727528Z level=info msg="Executing migration" id="Add is_service_account column to user" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.387130918Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.40697ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.391148642Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.399046887Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.897305ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.408007925Z level=info msg="Executing migration" id="Add uid column to user" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.40918454Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.178415ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.41349207Z level=info msg="Executing migration" id="Update uid column values for users" 
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.413704874Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=212.064µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.416728527Z level=info msg="Executing migration" id="Add unique index user_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.41729989Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=572.163µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.421042958Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.421273773Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=230.735µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.426724037Z level=info msg="Executing migration" id="update login and email fields to lowercase" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.427042114Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=318.047µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.433747374Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.434044831Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=305.868µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.443936848Z level=info msg="Executing migration" id="create temp user table v1-7" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.445019331Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.085443ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.452590319Z level=info msg="Executing 
migration" id="create index IDX_temp_user_email - v1-7" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.453646922Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.058283ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.46076459Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.461413954Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=654.984µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.46649149Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.467034081Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=542.871µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.474179892Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.475154272Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=974.041µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.480616476Z level=info msg="Executing migration" id="Update temp_user table charset" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.480663287Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=47.521µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.486087381Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.487329447Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.235356ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.491336241Z level=info msg="Executing migration" id="drop index 
IDX_temp_user_org_id - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.491964164Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=627.243µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.500814179Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.501389502Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=576.873µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.509698436Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.510277448Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=581.292µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.516515348Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.51898779Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.474312ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.524567087Z level=info msg="Executing migration" id="create temp_user v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.525477426Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=910.649µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.531178115Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.531993973Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=815.418µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.539344116Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 14:29:26 grafana | 
logger=migrator t=2024-07-03T14:26:52.53999033Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=649.764µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.547368374Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.547959227Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=591.733µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.551873949Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.552464932Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=586.052µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.559717713Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.560099612Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=381.969µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.566474765Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.567276322Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=801.016µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.573619784Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.574148176Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=527.832µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.579716753Z level=info msg="Executing migration" id="create star table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.580441278Z 
level=info msg="Migration successfully executed" id="create star table" duration=724.685µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.590038168Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.591361956Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.314488ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.595959162Z level=info msg="Executing migration" id="create org table v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.596854922Z level=info msg="Migration successfully executed" id="create org table v1" duration=896.17µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.601029469Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.601805805Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=776.006µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.607141356Z level=info msg="Executing migration" id="create org_user table v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.608186419Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.045253ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.614118423Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.61542417Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.305537ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.624192233Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.624978211Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" 
duration=788.658µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.631452766Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.633084801Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.631885ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.638391522Z level=info msg="Executing migration" id="Update org table charset" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.638428673Z level=info msg="Migration successfully executed" id="Update org table charset" duration=38.251µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.643245513Z level=info msg="Executing migration" id="Update org_user table charset" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.643267913Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=23.11µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.648987373Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.649183587Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=195.774µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.655551971Z level=info msg="Executing migration" id="create dashboard table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.656291246Z level=info msg="Migration successfully executed" id="create dashboard table" duration=738.555µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.660653757Z level=info msg="Executing migration" id="add index dashboard.account_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.661431214Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=771.677µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.667615744Z level=info msg="Executing 
migration" id="add unique index dashboard_account_id_slug" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.668493062Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=876.688µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.673693741Z level=info msg="Executing migration" id="create dashboard_tag table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.674997559Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.303148ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.681885263Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.683364454Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.483011ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.690768699Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.691608627Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=840.589µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.695464637Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.701594006Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.125778ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.710082733Z level=info msg="Executing migration" id="create dashboard v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.710976192Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=897.169µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.717230684Z level=info msg="Executing migration" id="create 
index IDX_dashboard_org_id - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.718252875Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.022431ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.723076636Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.724040216Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=964.94µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.729819857Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.730185694Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=366.437µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.734026376Z level=info msg="Executing migration" id="drop table dashboard_v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.73659642Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=2.563223ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.741307057Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.741554832Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=248.685µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.749102941Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.751288707Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.185726ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.756173669Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:52.75816273Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.988761ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.761532302Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.763681946Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.157165ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.768897986Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.770150942Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.252626ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.779426446Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.781356067Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.923281ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.786104836Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.78769086Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.591803ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.793277787Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.794063103Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=784.656µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.798151789Z level=info msg="Executing migration" id="Update dashboard table charset" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.79818536Z level=info msg="Migration 
successfully executed" id="Update dashboard table charset" duration=34.091µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.802066781Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.802104551Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=38.61µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.808931014Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.812490509Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.560165ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.815313269Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.81731374Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.999941ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.820116798Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.821987348Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.86454ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.826169536Z level=info msg="Executing migration" id="Add column uid in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.828018074Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.848148ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.830459486Z level=info msg="Executing migration" id="Update uid column values in dashboard" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.830701661Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=242.235µs 14:29:26 grafana 
| logger=migrator t=2024-07-03T14:26:52.832905927Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.834275785Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.369918ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.839487694Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.84067652Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.188986ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.843609691Z level=info msg="Executing migration" id="Update dashboard title length" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.843630582Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=21.641µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.846480531Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.847283368Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=802.507µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.85167145Z level=info msg="Executing migration" id="create dashboard_provisioning" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.852789833Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.117793ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.855697314Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.862540158Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 
duration=6.838564ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.865416188Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.866265896Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=849.378µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.868896461Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.869462622Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=566.121µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.873460626Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.874037459Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=576.693µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.876627773Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.877126163Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=498.3µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.880142637Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.880970924Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=828.037µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.885597541Z level=info msg="Executing migration" id="Add check_sum column" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.887662594Z level=info msg="Migration successfully executed" id="Add check_sum 
column" duration=2.064753ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.890296319Z level=info msg="Executing migration" id="Add index for dashboard_title"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.891326651Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.030152ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.894344844Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.89462356Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=278.526µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.899319308Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.899440551Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=121.623µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.901878422Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.902406413Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=527.881µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.904846034Z level=info msg="Executing migration" id="Add isPublic for dashboard"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.906342285Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.496021ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.910454212Z level=info msg="Executing migration" id="Add deleted for dashboard"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.912667237Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.212755ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.915674131Z level=info msg="Executing migration" id="Add index for deleted"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.916644381Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=970.23µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.920003861Z level=info msg="Executing migration" id="create data_source table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.925879704Z level=info msg="Migration successfully executed" id="create data_source table" duration=5.874863ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.931538243Z level=info msg="Executing migration" id="add index data_source.account_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.932225197Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=694.274µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.934888793Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.935803533Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=914.62µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.938633732Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.939800266Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.166304ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.945174789Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.946049617Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=873.868µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.948671132Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.955031686Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.360164ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.958007968Z level=info msg="Executing migration" id="create data_source table v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.95907195Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.063342ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.96383985Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.964413952Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=573.902µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.966773291Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.967592268Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=816.737µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.971291756Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.972124804Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=833.959µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.977302032Z level=info msg="Executing migration" id="Add column with_credentials"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.979565519Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.262687ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.982595912Z level=info msg="Executing migration" id="Add secure json data column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.985250448Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.654136ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.988098818Z level=info msg="Executing migration" id="Update data_source table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.988128418Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=30.06µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.993691825Z level=info msg="Executing migration" id="Update initial version to 1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.993880679Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=188.994µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.996233438Z level=info msg="Executing migration" id="Add read_only data column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:52.998812052Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.577144ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.001754064Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.001954598Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=207.814µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.004777229Z level=info msg="Executing migration" id="Update json_data with nulls"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.005016755Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=239.415µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.013775062Z level=info msg="Executing migration" id="Add uid column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.016170077Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.394325ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.024462198Z level=info msg="Executing migration" id="Update uid value"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.024744065Z level=info msg="Migration successfully executed" id="Update uid value" duration=282.017µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.032489604Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.034366168Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.855463ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.044350128Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.045159248Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=810.91µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.054026163Z level=info msg="Executing migration" id="Add is_prunable column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.05953611Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=5.508137ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.067631867Z level=info msg="Executing migration" id="Add api_version column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.070097755Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.465817ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.078140341Z level=info msg="Executing migration" id="create api_key table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.078880578Z level=info msg="Migration successfully executed" id="create api_key table" duration=742.166µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.115912054Z level=info msg="Executing migration" id="add index api_key.account_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.120209724Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=4.296549ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.125522235Z level=info msg="Executing migration" id="add index api_key.key"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.126651462Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.134217ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.131993416Z level=info msg="Executing migration" id="add index api_key.account_id_name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.132790264Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=796.988µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.135756072Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.13652831Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=770.108µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.142718504Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.14342726Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=708.646µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.145748914Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.14643425Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=685.216µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.149597063Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.157962936Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.365053ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.164267782Z level=info msg="Executing migration" id="create api_key table v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.16633523Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=2.072879ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.170600949Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.172353689Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.75789ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.176339071Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.17715809Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=818.069µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.179990166Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.180848885Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=858.939µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.184782457Z level=info msg="Executing migration" id="copy api_key v1 to v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.185089224Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=307.907µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.188955593Z level=info msg="Executing migration" id="Drop old table api_key_v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.189806242Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=850.209µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.194707726Z level=info msg="Executing migration" id="Update api_key table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.194755837Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=48.061µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.19836794Z level=info msg="Executing migration" id="Add expires to api_key table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.200928759Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.558029ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.205662349Z level=info msg="Executing migration" id="Add service account foreign key"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.208146717Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.484158ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.314017465Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.314353103Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=338.688µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.414293804Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.418670535Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.379091ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.423833495Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.426344873Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.511339ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.429986347Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.430700053Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=713.436µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.434123063Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.434616924Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=495.141µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.440157382Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.441426441Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.266039ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.446232663Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.447546312Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.313ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.451930084Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.453250914Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.32075ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.458300571Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.459687304Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.386423ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.464800262Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.464865184Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=65.482µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.468323224Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.468396825Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=75.772µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.475266484Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.480002473Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.736229ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.484331294Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.486945744Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.613959ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.49021323Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.490278281Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=64.151µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.492836171Z level=info msg="Executing migration" id="create quota table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.493525726Z level=info msg="Migration successfully executed" id="create quota table v1" duration=689.645µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.498063781Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.49931909Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.254569ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.502994275Z level=info msg="Executing migration" id="Update quota table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.503031626Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=38.301µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.506451365Z level=info msg="Executing migration" id="create plugin_setting table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.507189432Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=736.267µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.511540112Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.512426554Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=885.852µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.516724323Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.521250148Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.525695ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.525854053Z level=info msg="Executing migration" id="Update plugin_setting table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.525875574Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=22.181µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.533760126Z level=info msg="Executing migration" id="create session table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.535705152Z level=info msg="Migration successfully executed" id="create session table" duration=1.945006ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.539253413Z level=info msg="Executing migration" id="Drop old table playlist table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.539443408Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=190.475µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.54386566Z level=info msg="Executing migration" id="Drop old table playlist_item table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.543946942Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=81.652µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.54689304Z level=info msg="Executing migration" id="create playlist table v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.547595426Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=702.686µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.551537818Z level=info msg="Executing migration" id="create playlist item table v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.552771866Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.233448ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.584797166Z level=info msg="Executing migration" id="Update playlist table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.584845368Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=47.311µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.645946181Z level=info msg="Executing migration" id="Update playlist_item table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.645988812Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=45.911µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.648895029Z level=info msg="Executing migration" id="Add playlist column created_at"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.653350452Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.456383ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.657395795Z level=info msg="Executing migration" id="Add playlist column updated_at"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.660405065Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.00875ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.663522167Z level=info msg="Executing migration" id="drop preferences table v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.6635963Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=74.553µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.666279171Z level=info msg="Executing migration" id="drop preferences table v3"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.666355602Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=76.461µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.670436047Z level=info msg="Executing migration" id="create preferences table v3"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.671915881Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.480994ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.675262779Z level=info msg="Executing migration" id="Update preferences table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.67530096Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=39.861µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.678590226Z level=info msg="Executing migration" id="Add column team_id in preferences"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.681707138Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.116032ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.711560159Z level=info msg="Executing migration" id="Update team_id column values in preferences"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.711798134Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=236.145µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.71509096Z level=info msg="Executing migration" id="Add column week_start in preferences"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.719960043Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.870063ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.72545724Z level=info msg="Executing migration" id="Add column preferences.json_data"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.728549771Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.091831ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.74449779Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.744570511Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=73.031µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.77996338Z level=info msg="Executing migration" id="Add preferences index org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.780752158Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=788.438µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.783908811Z level=info msg="Executing migration" id="Add preferences index user_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.78470346Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=793.399µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.788012346Z level=info msg="Executing migration" id="create alert table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.78904793Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.036424ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.793137995Z level=info msg="Executing migration" id="add index alert org_id & id "
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.793908023Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=769.248µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.797124847Z level=info msg="Executing migration" id="add index alert state"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.797904035Z level=info msg="Migration successfully executed" id="add index alert state" duration=779.358µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.801398286Z level=info msg="Executing migration" id="add index alert dashboard_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.802173484Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=774.828µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.806066473Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.806687109Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=619.975µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.809938074Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.811305675Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.367162ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.814705573Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.816024824Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.318831ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.820049187Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.830912708Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.857341ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.840667714Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.841271948Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=605.034µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.844785029Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.846045759Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.26098ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.851394372Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.851701089Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=306.827µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.854606557Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.855141419Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=534.372µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.858176889Z level=info msg="Executing migration" id="create alert_notification table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.859307615Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.130996ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.863760248Z level=info msg="Executing migration" id="Add column is_default"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.86901237Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.252952ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.872534741Z level=info msg="Executing migration" id="Add column frequency"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.876261727Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.724546ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.879437301Z level=info msg="Executing migration" id="Add column send_reminder"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.883121946Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.684775ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.887060857Z level=info msg="Executing migration" id="Add column disable_resolve_message"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.890828244Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.766338ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.894298194Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.8958674Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.570126ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.900318743Z level=info msg="Executing migration" id="Update alert table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.900357124Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=42.631µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.906145858Z level=info msg="Executing migration" id="Update alert_notification table charset"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.906206029Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=69.951µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.910908959Z level=info msg="Executing migration" id="create notification_journal table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.914396059Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=3.484301ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.931550275Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.9330222Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.475565ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.936565472Z level=info msg="Executing migration" id="drop alert_notification_journal"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.937699397Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.133565ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.941123487Z level=info msg="Executing migration" id="create alert_notification_state table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.942051399Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=927.142µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.94646668Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.947513265Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.046505ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.956460392Z level=info msg="Executing migration" id="Add for to alert table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.965179533Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=8.715621ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.970173179Z level=info msg="Executing migration" id="Add column uid in alert_notification"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.973865944Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.692235ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.980076507Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.980502218Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=426.551µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.984023949Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.985070593Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.046594ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.990466908Z level=info msg="Executing migration" id="Remove unique index org_id_name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.991432531Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=970.213µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.994321938Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:53.998139306Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.816858ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.001265858Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.001334229Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=69.651µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.006113834Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.00695653Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=844.156µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.010078704Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.010884241Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=805.396µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.015087529Z level=info msg="Executing migration" id="Drop old annotation table v4"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.015174361Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=88.212µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.019603235Z level=info msg="Executing migration" id="create annotation table v5"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.02180832Z level=info
msg="Migration successfully executed" id="create annotation table v5" duration=2.237376ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.029030732Z level=info msg="Executing migration" id="add index annotation 0 v3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.029706296Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=675.354µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.032962044Z level=info msg="Executing migration" id="add index annotation 1 v3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.033609989Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=648.385µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.03937406Z level=info msg="Executing migration" id="add index annotation 2 v3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.040919831Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.546021ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.045714432Z level=info msg="Executing migration" id="add index annotation 3 v3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.047440089Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.722307ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.05277913Z level=info msg="Executing migration" id="add index annotation 4 v3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.055110449Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.335819ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.063020695Z level=info msg="Executing migration" id="Update annotation table charset" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.063057756Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=36.931µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.068858958Z level=info 
msg="Executing migration" id="Add column region_id to annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.072175747Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.321439ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.075313633Z level=info msg="Executing migration" id="Drop category_id index" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.076491238Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.180775ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.084763381Z level=info msg="Executing migration" id="Add column tags to annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.0894677Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.700498ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.095189499Z level=info msg="Executing migration" id="Create annotation_tag table v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.096015236Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=826.227µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.103582024Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.10484985Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.266896ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.108244291Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.109280693Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.036302ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.116662447Z level=info 
msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.127269149Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=10.606622ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.13021901Z level=info msg="Executing migration" id="Create annotation_tag table v3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.130749831Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=530.781µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.134777906Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.135777366Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=999.15µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.138898181Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.139208597Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=310.986µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.142411274Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.142933656Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=521.942µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.147120793Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.147318847Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT 
to empty" duration=197.894µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.152411584Z level=info msg="Executing migration" id="Add created time to annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.15751197Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.093127ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.16421886Z level=info msg="Executing migration" id="Add updated time to annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.168640482Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.422702ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.174095547Z level=info msg="Executing migration" id="Add index for created in annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.175075667Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=979.43µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.182459251Z level=info msg="Executing migration" id="Add index for updated in annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.183467852Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.012261ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.186659588Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.186895023Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=235.405µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.192303766Z level=info msg="Executing migration" id="Add epoch_end column" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.197092976Z level=info msg="Migration successfully executed" id="Add epoch_end column" 
duration=4.79454ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.202493239Z level=info msg="Executing migration" id="Add index for epoch_end" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.203658043Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.168204ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.206599795Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.206760238Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=157.463µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.210809223Z level=info msg="Executing migration" id="Move region to single row" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.21115982Z level=info msg="Migration successfully executed" id="Move region to single row" duration=351.057µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.221321632Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.222442605Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.130013ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.227565383Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.228502202Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=935.159µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.231841322Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.233311063Z level=info msg="Migration successfully executed" id="Add index for 
org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.464491ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.240190616Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.241081755Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=890.279µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.304304644Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.305729905Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.425321ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.310428562Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.311773791Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.349799ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.320432892Z level=info msg="Executing migration" id="Increase tags column to length 4096" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.320533614Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=101.383µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.325940976Z level=info msg="Executing migration" id="create test_data table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.32851057Z level=info msg="Migration successfully executed" id="create test_data table" duration=2.569564ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.334677009Z level=info msg="Executing migration" id="create dashboard_version table v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.335510896Z level=info 
msg="Migration successfully executed" id="create dashboard_version table v1" duration=833.557µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.346321572Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.348218402Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.89683ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.356655028Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.358084877Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.429409ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.361600162Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.361889887Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=289.996µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.367357982Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.36776118Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=403.248µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.3754497Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.375587233Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=138.203µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.381434385Z level=info msg="Executing migration" id="create team table" 
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.38309262Z level=info msg="Migration successfully executed" id="create team table" duration=1.656924ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.387695326Z level=info msg="Executing migration" id="add index team.org_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.389439313Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.745067ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.397874079Z level=info msg="Executing migration" id="add unique index team_org_id_name" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.399469641Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.595222ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.40318477Z level=info msg="Executing migration" id="Add column uid in team" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.407814606Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.628555ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.410702406Z level=info msg="Executing migration" id="Update uid column values in team" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.410954691Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=252.045µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.415149509Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.41617051Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.021081ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.422044983Z level=info msg="Executing migration" id="create team member table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.422894731Z level=info msg="Migration successfully executed" id="create team member table" 
duration=846.248µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.430884577Z level=info msg="Executing migration" id="add index team_member.org_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.432533252Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.647705ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.438948526Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.440118211Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.168675ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.443129294Z level=info msg="Executing migration" id="add index team_member.team_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.444158395Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.029331ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.44869061Z level=info msg="Executing migration" id="Add column email to team table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.453935889Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.240219ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.459109206Z level=info msg="Executing migration" id="Add column external to team_member table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.464273525Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.164189ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.470121577Z level=info msg="Executing migration" id="Add column permission to team_member table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.474787624Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.661798ms 14:29:26 grafana 
| logger=migrator t=2024-07-03T14:26:54.480418862Z level=info msg="Executing migration" id="create dashboard acl table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.481394442Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=967.989µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.486627251Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.487751415Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.124054ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.493061156Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.494125608Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.064672ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.502567034Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.504337192Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.771207ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.50957229Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.510612862Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.040812ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.513700887Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.514756818Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.055582ms 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:54.518324114Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.519529178Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.205004ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.525398481Z level=info msg="Executing migration" id="add index dashboard_permission" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.527097887Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.700026ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.531164771Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.532217494Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.051963ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.539204139Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.539509425Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=305.636µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.545816557Z level=info msg="Executing migration" id="create tag table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.546684466Z level=info msg="Migration successfully executed" id="create tag table" duration=867.689µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.554668282Z level=info msg="Executing migration" id="add index tag.key_value" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.555809246Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.140784ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.561840112Z level=info msg="Executing migration" 
id="create login attempt table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.564572319Z level=info msg="Migration successfully executed" id="create login attempt table" duration=2.735227ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.567854617Z level=info msg="Executing migration" id="add index login_attempt.username" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.569187305Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.333118ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.572180608Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.573468314Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.287156ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.579445149Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.593912992Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.468093ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.596767972Z level=info msg="Executing migration" id="create login_attempt v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.597515007Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=746.945µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.604547553Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.605680527Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.131844ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.614700535Z level=info msg="Executing migration" id="copy login_attempt 
v1 to v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.615484841Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=792.556µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.619338682Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.620118148Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=779.166µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.623986299Z level=info msg="Executing migration" id="create user auth table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.624992001Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.005672ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.632144979Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.633149821Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.003852ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.6359944Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.636136263Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=142.563µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.643349173Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.650854531Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.508797ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.656163901Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:54.661401441Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.23676ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.667380615Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.672437502Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.056306ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.684402221Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.692456349Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.050798ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.696780699Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.697659028Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=875.599µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.702405896Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.712781603Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=10.375677ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.718114165Z level=info msg="Executing migration" id="create server_lock table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.718776218Z level=info msg="Migration successfully executed" id="create server_lock table" duration=662.524µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.72648577Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.728738977Z level=info msg="Migration successfully executed" id="add 
index server_lock.operation_uid" duration=2.255687ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.732050876Z level=info msg="Executing migration" id="create user auth token table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.733873604Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.823178ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.73943922Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.741067414Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.623653ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.743721989Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.745427025Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.705126ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.747976708Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.748972539Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=996.471µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.754045815Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.761051051Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.003416ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.763842739Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.765147697Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.304368ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.768020627Z level=info msg="Executing migration" id="create cache_data table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.7691684Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.147523ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.773580933Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.774482572Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=900.798µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.777201918Z level=info msg="Executing migration" id="create short_url table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.77823492Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.032802ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.781033638Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.781999169Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=964.891µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.785782777Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.785851639Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=69.512µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.787909452Z level=info msg="Executing migration" id="delete alert_definition table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.788004914Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=92.522µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.79257261Z level=info msg="Executing migration" id="recreate alert_definition table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.793475648Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=902.528µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.802604619Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.807393969Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=4.7878ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.810963273Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.812267781Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.304168ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.815012678Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.815071909Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=59.911µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.818498081Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.819219005Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=720.154µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.823344132Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.823990105Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=645.883µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.826621431Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.827308645Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=686.935µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.830680225Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.83141219Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=729.575µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.836072978Z level=info msg="Executing migration" id="Add column paused in alert_definition"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.84193127Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.857713ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.846517286Z level=info msg="Executing migration" id="drop alert_definition table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.847408644Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=888.618µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.851346556Z level=info msg="Executing migration" id="delete alert_definition_version table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.851428649Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=82.393µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.85437893Z level=info msg="Executing migration" id="recreate alert_definition_version table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.855299009Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=919.109µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.85819956Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.859138319Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=938.679µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.862816876Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.863753656Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=936.699µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.866677876Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.866792569Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=115.963µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.870258591Z level=info msg="Executing migration" id="drop alert_definition_version table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.871824394Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.564693ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.876054162Z level=info msg="Executing migration" id="create alert_instance table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.877102864Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.048392ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.883727253Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.885384148Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.656415ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.891187509Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.892512666Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.300737ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.89561404Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.90323141Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.6174ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.906881146Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.907762495Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=880.489µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.912512493Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.913399282Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=886.849µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.917952137Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:54.949965776Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=32.006668ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.014490872Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.046893709Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=32.418307ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.052997721Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.054307252Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.310201ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.061435738Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.062954334Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.518836ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.066138538Z level=info msg="Executing migration" id="add current_reason column related to current_state"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.072252571Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.114693ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.076970872Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.082937921Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.966949ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.085835039Z level=info msg="Executing migration" id="create alert_rule table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.086541235Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=706.426µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.089360241Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.090052558Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=691.197µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.097497111Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.098558326Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.065065ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.104559126Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.105644631Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.085095ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.109569344Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.109654336Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=79.611µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.112176465Z level=info msg="Executing migration" id="add column for to alert_rule"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.118744318Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.571944ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.155487286Z level=info msg="Executing migration" id="add column annotations to alert_rule"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.16420395Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=8.719024ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.168631694Z level=info msg="Executing migration" id="add column labels to alert_rule"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.180350557Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=11.715364ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.184644387Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.185640681Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=997.274µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.189193184Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.190208877Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.014813ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.199979616Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.205661468Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.678942ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.208910634Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.216686706Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.770832ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.223326341Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.225116043Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.792672ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.229923376Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.236295864Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.373118ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.239804576Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.244096387Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.290831ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.248146031Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.248229973Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=87.122µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.251275174Z level=info msg="Executing migration" id="create alert_rule_version table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.252395191Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.125387ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.25791378Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.259209389Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.29988ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.26647982Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.267631126Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.152716ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.271962188Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.272034589Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=73.131µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.274849625Z level=info msg="Executing migration" id="add column for to alert_rule_version"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.281580982Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.732127ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.284632863Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.291779651Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.146218ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.29516814Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.302084851Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.916311ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.309210338Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.315445843Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.234885ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.319189921Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.326101792Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.911231ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.329831429Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.329987503Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=130.873µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.333181737Z level=info msg="Executing migration" id=create_alert_configuration_table
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.33414687Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=966.303µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.338557163Z level=info msg="Executing migration" id="Add column default in alert_configuration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.344849631Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.291038ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.34998201Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.350109634Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=127.384µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.352801157Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.359449912Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.648285ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.364854249Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.365924414Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.069774ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.369233031Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.375762094Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.528493ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.380701659Z level=info msg="Executing migration" id=create_ngalert_configuration_table
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.381608551Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=905.692µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.38845277Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.389650849Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.197469ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.397006611Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.403859581Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.858251ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.406686707Z level=info msg="Executing migration" id="create provenance_type table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.407277931Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=591.044µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.410807163Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.411824938Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.017535ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.415732039Z level=info msg="Executing migration" id="create alert_image table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.416551638Z level=info msg="Migration successfully executed" id="create alert_image table" duration=819.519µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.420216294Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.421447142Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.229938ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.426592063Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.426664175Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=73.272µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.434363345Z level=info msg="Executing migration" id=create_alert_configuration_history_table
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.435339637Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=975.582µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.438747607Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.439684739Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=936.362µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.446391486Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.447125093Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.452250253Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.453021101Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=770.608µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.456859991Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.457893165Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.033404ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.461417997Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.470611433Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.193936ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.475509647Z level=info msg="Executing migration" id="create library_element table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.476304066Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=795.159µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.479252245Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.480070444Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=817.889µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.48289333Z level=info msg="Executing migration" id="create library_element_connection table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.483718539Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=824.899µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.48802924Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.489106695Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.076675ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.491950052Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.493011557Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.061065ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.496597791Z level=info msg="Executing migration" id="increase max description length to 2048"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.496627351Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=30.92µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.500283337Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.500387539Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=75.062µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.508468898Z level=info msg="Executing migration" id="add library_element folder uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.518512823Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.045585ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.522143528Z level=info msg="Executing migration" id="populate library_element folder_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.522509787Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=365.939µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.526313205Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.527386161Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.072146ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.531235101Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.531511037Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=275.986µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.534761093Z level=info msg="Executing migration" id="create data_keys table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.53589049Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.131497ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.539022173Z level=info msg="Executing migration" id="create secrets table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.540120238Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.097515ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.548917545Z level=info msg="Executing migration" id="rename data_keys name column to id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.58591159Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=36.991725ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.589132526Z level=info msg="Executing migration" id="add name column into data_keys"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.594672765Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.538999ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.597998623Z level=info msg="Executing migration" id="copy data_keys id column values into name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.598146466Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=141.564µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.601864733Z level=info msg="Executing migration" id="rename data_keys name column to label"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.634413545Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.549121ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.638360497Z level=info msg="Executing migration" id="rename data_keys id column back to name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.668552393Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.192117ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.673156501Z level=info msg="Executing migration" id="create kv_store table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.673940589Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=783.818µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.681943006Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.683885052Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.940276ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.687533847Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.687833824Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=293.527µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.69364676Z level=info msg="Executing migration" id="create permission table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.694471929Z level=info msg="Migration successfully executed" id="create permission table" duration=825.119µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.700514941Z level=info msg="Executing migration" id="add unique index permission.role_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.702353234Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.835313ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.706244835Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.708325813Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.080978ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.713868673Z level=info msg="Executing migration" id="create role table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.715038211Z level=info msg="Migration successfully executed" id="create role table" duration=1.168838ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.719769211Z level=info msg="Executing migration" id="add column display_name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.728284141Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.51413ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.73595591Z level=info msg="Executing migration" id="add column group_name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.746058646Z level=info msg="Migration successfully executed" id="add column group_name" duration=10.102736ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.750629084Z level=info msg="Executing migration" id="add index role.org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.751462573Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=832.819µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.755817574Z level=info msg="Executing migration" id="add unique index role_org_id_name"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.757072454Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.25488ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.761772663Z level=info msg="Executing migration" id="add index role_org_id_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.763706099Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.932386ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.772822842Z level=info msg="Executing migration" id="create team role table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.773797745Z level=info msg="Migration successfully executed" id="create team role table" duration=974.593µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.781815053Z level=info msg="Executing migration" id="add index team_role.org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.783075352Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.259349ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.786744918Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.787894245Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.148857ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.791455878Z level=info msg="Executing migration" id="add index team_role.team_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.792614076Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.158228ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.796140898Z level=info msg="Executing migration" id="create user role table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.797056789Z level=info msg="Migration successfully executed" id="create user role table" duration=914.901µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.802058326Z level=info msg="Executing migration" id="add index user_role.org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.803219943Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.160887ms
14:29:26 grafana |
logger=migrator t=2024-07-03T14:26:55.806560361Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.807841532Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.28506ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.811075247Z level=info msg="Executing migration" id="add index user_role.user_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.812271326Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.195709ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.817231411Z level=info msg="Executing migration" id="create builtin role table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.818170774Z level=info msg="Migration successfully executed" id="create builtin role table" duration=935.633µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.827022141Z level=info msg="Executing migration" id="add index builtin_role.role_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.82827519Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.253599ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.831361402Z level=info msg="Executing migration" id="add index builtin_role.name" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.832415097Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.053615ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.837984207Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.84708836Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.102163ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.850383997Z level=info msg="Executing migration" 
id="add index builtin_role.org_id" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.851493143Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.108656ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.854926553Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.857084394Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.155871ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.862416758Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.86419075Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.772612ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.868847829Z level=info msg="Executing migration" id="add unique index role.uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.870518597Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.670698ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.874834209Z level=info msg="Executing migration" id="create seed assignment table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.875642118Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=808.099µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.87871918Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.879775775Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.055255ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.882606651Z level=info msg="Executing migration" id="add column hidden to role table" 14:29:26 grafana | 
logger=migrator t=2024-07-03T14:26:55.890911765Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.304504ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.896167498Z level=info msg="Executing migration" id="permission kind migration" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.904324398Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.157001ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.910019442Z level=info msg="Executing migration" id="permission attribute migration" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.917276472Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.25545ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.919817241Z level=info msg="Executing migration" id="permission identifier migration" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.927924861Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.10621ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.932794995Z level=info msg="Executing migration" id="add permission identifier index" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.933585543Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=790.658µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.93819025Z level=info msg="Executing migration" id="add permission action scope role_id index" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.939735647Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.543167ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.94586263Z level=info msg="Executing migration" id="remove permission role_id action scope index" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.9475307Z level=info msg="Migration successfully 
executed" id="remove permission role_id action scope index" duration=1.66804ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.95521445Z level=info msg="Executing migration" id="create query_history table v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.956782736Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.567436ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.960027952Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.961712901Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.684329ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.964752172Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.964817264Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=65.142µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.969743149Z level=info msg="Executing migration" id="rbac disabled migrator" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.969801001Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=58.542µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.973829405Z level=info msg="Executing migration" id="teams permissions migration" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.974464339Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=634.264µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.977808368Z level=info msg="Executing migration" id="dashboard permissions" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.978661918Z level=info msg="Migration successfully executed" 
id="dashboard permissions" duration=858.25µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.981992725Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.982576349Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=583.484µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.989213825Z level=info msg="Executing migration" id="drop managed folder create actions" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.989481491Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=266.926µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:55.999896774Z level=info msg="Executing migration" id="alerting notification permissions" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.000736534Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=839.3µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.003881568Z level=info msg="Executing migration" id="create query_history_star table v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.00529026Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.409672ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.008099186Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.009272254Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.172477ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.015942979Z level=info msg="Executing migration" id="add column org_id in query_history_star" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.027802855Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.860196ms 14:29:26 
grafana | logger=migrator t=2024-07-03T14:26:56.035846463Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.035913305Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=67.322µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.042206033Z level=info msg="Executing migration" id="create correlation table v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.043754378Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.547365ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.049178955Z level=info msg="Executing migration" id="add index correlations.uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.050929156Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.748441ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.055948523Z level=info msg="Executing migration" id="add index correlations.source_uid" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.057044239Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.091556ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.061488862Z level=info msg="Executing migration" id="add correlation config column" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.072137301Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.648409ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.077269711Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.078051469Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=781.158µs 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:56.080529217Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.081614742Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.084946ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.088620126Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.11276512Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.150925ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.116611709Z level=info msg="Executing migration" id="create correlation v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.11752544Z level=info msg="Migration successfully executed" id="create correlation v2" duration=913.641µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.122038566Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.123297326Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.25684ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.128702091Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.129809558Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.107357ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.132536502Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.133647847Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.110605ms 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:56.137321604Z level=info msg="Executing migration" id="copy correlation v1 to v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.137558139Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=236.576µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.142287488Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.143644071Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.352322ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.148469684Z level=info msg="Executing migration" id="add provisioning column" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.158857916Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.388312ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.162095761Z level=info msg="Executing migration" id="create entity_events table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.162893121Z level=info msg="Migration successfully executed" id="create entity_events table" duration=794.229µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.167946848Z level=info msg="Executing migration" id="create dashboard public config v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.168946462Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=998.074µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.172532705Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.173096548Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.176098468Z level=info msg="Executing migration" id="drop index 
IDX_dashboard_public_config_org_id_dashboard_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.176665372Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.183364658Z level=info msg="Executing migration" id="Drop old dashboard public config table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.184059874Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=694.246µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.189617264Z level=info msg="Executing migration" id="recreate dashboard public config v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.191054508Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.433734ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.194475518Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.195956672Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.481154ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.202412843Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.203331244Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=918.761µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.206003286Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.206833665Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" 
duration=829.729µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.210472811Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.211753111Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.27808ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.218626922Z level=info msg="Executing migration" id="Drop public config table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.219655515Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.033453ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.227048179Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.228249096Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.200607ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.231185474Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.232255409Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.069785ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.238327152Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.239437417Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.109715ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.243931042Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:56.246195505Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.265412ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.249851651Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.270282767Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.427416ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.275665973Z level=info msg="Executing migration" id="add annotations_enabled column" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.284469559Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.807646ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.290380616Z level=info msg="Executing migration" id="add time_selection_enabled column" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.297280598Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.899342ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.300200736Z level=info msg="Executing migration" id="delete orphaned public dashboards" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.300448512Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=248.126µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.304077946Z level=info msg="Executing migration" id="add share column" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.312510993Z level=info msg="Migration successfully executed" id="add share column" duration=8.432367ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.316182519Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 14:29:26 grafana | logger=migrator 
t=2024-07-03T14:26:56.316384883Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=202.144µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.321657397Z level=info msg="Executing migration" id="create file table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.322660781Z level=info msg="Migration successfully executed" id="create file table" duration=1.004854ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.329177782Z level=info msg="Executing migration" id="file table idx: path natural pk" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.331364314Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.185301ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.334613449Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.336595626Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.980997ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.339655367Z level=info msg="Executing migration" id="create file_meta table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.340404725Z level=info msg="Migration successfully executed" id="create file_meta table" duration=749.168µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.34489464Z level=info msg="Executing migration" id="file table idx: path key" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.345829571Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=934.351µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.351649167Z level=info msg="Executing migration" id="set path collation in file table" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.351830491Z level=info msg="Migration successfully 
executed" id="set path collation in file table" duration=180.384µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.355254111Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.355424785Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=169.284µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.361458037Z level=info msg="Executing migration" id="managed permissions migration" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.362383618Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=926.892µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.365327757Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.365627693Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=299.377µs 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.369141675Z level=info msg="Executing migration" id="RBAC action name migrator" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.373119479Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=3.975924ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.377436809Z level=info msg="Executing migration" id="Add UID column to playlist" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.38905304Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=11.615451ms 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.393811021Z level=info msg="Executing migration" id="Update uid column values in playlist" 14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.394014926Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" 
duration=203.405µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.399903594Z level=info msg="Executing migration" id="Add index for uid in playlist"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.400930868Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.027195ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.404534962Z level=info msg="Executing migration" id="update group index for alert rules"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.404935131Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=392.689µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.411834473Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.412132769Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=299.816µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.416941451Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.417908704Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=967.003µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.422059161Z level=info msg="Executing migration" id="add action column to seed_assignment"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.432103116Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.043275ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.435382842Z level=info msg="Executing migration" id="add scope column to seed_assignment"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.443180934Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.797592ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.450461284Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.451622791Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.161377ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.458910101Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.535851488Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.941477ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.539143735Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.540071226Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=926.741µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.544179903Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.545650926Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.469864ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.550045509Z level=info msg="Executing migration" id="add primary key to seed_assigment"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.576606029Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.55122ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.582124268Z level=info msg="Executing migration" id="add origin column to seed_assignment"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.588509898Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.38662ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.591933768Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.592225284Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=290.986µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.596949055Z level=info msg="Executing migration" id="prevent seeding OnCall access"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.597130249Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=180.854µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.603282592Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.603499557Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=214.675µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.611968375Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.61217417Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=206.015µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.618048407Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.618253942Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=204.285µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.622274046Z level=info msg="Executing migration" id="create folder table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.623063074Z level=info msg="Migration successfully executed" id="create folder table" duration=788.928µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.628070991Z level=info msg="Executing migration" id="Add index for parent_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.629077804Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.005563ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.633829686Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.634816158Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=985.452µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.638522585Z level=info msg="Executing migration" id="Update folder title length"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.638545315Z level=info msg="Migration successfully executed" id="Update folder title length" duration=22.33µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.645295033Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.646236105Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=940.932µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.651264633Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.652115532Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=832.659µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.655095612Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.655978143Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=882.101µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.660049407Z level=info msg="Executing migration" id="Sync dashboard and folder table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.660415546Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=365.999µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.663421876Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.663655541Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=233.555µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.667656485Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.668496695Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=839.79µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.674238439Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.675836147Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.596968ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.679125413Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.68030537Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.179947ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.686717511Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.68800134Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.283809ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.692902415Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.694703047Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.799802ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.698153128Z level=info msg="Executing migration" id="create anon_device table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.699945239Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.788922ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.704171618Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.70557374Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.402082ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.711259753Z level=info msg="Executing migration" id="add index anon_device.updated_at"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.713401933Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.14142ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.719175498Z level=info msg="Executing migration" id="create signing_key table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.720219973Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.043895ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.72526607Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.726880038Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.610338ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.730977303Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.732839327Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.861884ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.73721175Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.737601229Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=390.459µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.742198415Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.752149988Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.950773ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.757228697Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.757840651Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=612.384µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.764096107Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.764123907Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=28.6µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.76762993Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.769410871Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.779991ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.774559691Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.774577542Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=21.881µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.778675187Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.781181816Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.504879ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.784803861Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.786147182Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.342661ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.796610166Z level=info msg="Executing migration" id="create sso_setting table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.797876745Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.266049ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.802067184Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.803421935Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.355961ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.807229375Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.80787554Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=648.814µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.812759454Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.81344909Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=688.826µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.816527772Z level=info msg="Executing migration" id="create cloud_migration table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.817670228Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.142017ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.820837842Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.822319616Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.478034ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.826436593Z level=info msg="Executing migration" id="add stack_id column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.836129319Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.692146ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.844223918Z level=info msg="Executing migration" id="add region_slug column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.855607734Z level=info msg="Migration successfully executed" id="add region_slug column" duration=11.378906ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.859619377Z level=info msg="Executing migration" id="add cluster_slug column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.868011653Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.388776ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.871400533Z level=info msg="Executing migration" id="add migration uid column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.87811917Z level=info msg="Migration successfully executed" id="add migration uid column" duration=6.718327ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.884012847Z level=info msg="Executing migration" id="Update uid column values for migration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.884261532Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=246.735µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.887642511Z level=info msg="Executing migration" id="Add unique index migration_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.889854173Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.210852ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.8943915Z level=info msg="Executing migration" id="add migration run uid column"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.906185865Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=11.794456ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.910342772Z level=info msg="Executing migration" id="Update uid column values for migration run"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.910523517Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=180.596µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.915947872Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.918179964Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.231492ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.923243383Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.923345145Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=101.392µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.927514153Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.937635939Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.088365ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.943825433Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.953707604Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.881201ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.963487842Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.963942184Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=452.312µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.969033322Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.969547975Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=518.023µs
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.973756453Z level=info msg="Executing migration" id="add record column to alert_rule table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.986232324Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.475962ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:56.990255208Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:57.001559482Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=11.302034ms
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:57.007071311Z level=info msg="migrations completed" performed=572 skipped=0 duration=4.781336875s
14:29:26 grafana | logger=migrator t=2024-07-03T14:26:57.007656145Z level=info msg="Unlocking database"
14:29:26 grafana | logger=sqlstore t=2024-07-03T14:26:57.023060098Z level=info msg="Created default admin" user=admin
14:29:26 grafana | logger=sqlstore t=2024-07-03T14:26:57.023326704Z level=info msg="Created default organization"
14:29:26 grafana | logger=secrets t=2024-07-03T14:26:57.029497518Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
14:29:26 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-07-03T14:26:57.084643514Z level=info msg="Restored cache from database" duration=603.355µs
14:29:26 grafana | logger=plugin.store t=2024-07-03T14:26:57.088277619Z level=info msg="Loading plugins..."
14:29:26 grafana | logger=plugins.registration t=2024-07-03T14:26:57.123788383Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered"
14:29:26 grafana | logger=plugins.initialization t=2024-07-03T14:26:57.123872445Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered"
14:29:26 grafana | logger=local.finder t=2024-07-03T14:26:57.123975448Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
14:29:26 grafana | logger=plugin.store t=2024-07-03T14:26:57.124015849Z level=info msg="Plugins loaded" count=54 duration=35.74081ms
14:29:26 grafana | logger=query_data t=2024-07-03T14:26:57.128071674Z level=info msg="Query Service initialization"
14:29:26 grafana | logger=live.push_http t=2024-07-03T14:26:57.131472323Z level=info msg="Live Push Gateway initialization"
14:29:26 grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-07-03T14:26:57.139138463Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386
14:29:26 grafana | logger=ngalert.state.manager t=2024-07-03T14:26:57.147945701Z level=info msg="Running in alternative execution of Error/NoData mode"
14:29:26 grafana | logger=infra.usagestats.collector t=2024-07-03T14:26:57.151356211Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
14:29:26 grafana | logger=provisioning.datasources t=2024-07-03T14:26:57.154631948Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
14:29:26 grafana | logger=provisioning.alerting t=2024-07-03T14:26:57.179422051Z level=info msg="starting to provision alerting"
14:29:26 grafana | logger=provisioning.alerting t=2024-07-03T14:26:57.179447711Z level=info msg="finished to provision alerting"
14:29:26 grafana | logger=grafanaStorageLogger t=2024-07-03T14:26:57.179582234Z level=info msg="Storage starting"
14:29:26 grafana | logger=ngalert.state.manager t=2024-07-03T14:26:57.180181268Z level=info msg="Warming state cache for startup"
14:29:26 grafana | logger=provisioning.dashboard t=2024-07-03T14:26:57.180591688Z level=info msg="starting to provision dashboards"
14:29:26 grafana | logger=http.server t=2024-07-03T14:26:57.1823796Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
14:29:26 grafana | logger=ngalert.multiorg.alertmanager t=2024-07-03T14:26:57.197139637Z level=info msg="Starting MultiOrg Alertmanager"
14:29:26 grafana | logger=ngalert.state.manager t=2024-07-03T14:26:57.23344413Z level=info msg="State cache has been initialized" states=0 duration=53.256872ms
14:29:26 grafana | logger=ngalert.scheduler t=2024-07-03T14:26:57.233564333Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
14:29:26 grafana | logger=ticker t=2024-07-03T14:26:57.233677665Z level=info msg=starting first_tick=2024-07-03T14:27:00Z
14:29:26 grafana | logger=plugins.update.checker t=2024-07-03T14:26:57.267322916Z level=info msg="Update check succeeded" duration=87.344783ms
14:29:26 grafana | logger=grafana.update.checker t=2024-07-03T14:26:57.271878503Z level=info msg="Update check succeeded" duration=92.217427ms
14:29:26 grafana | logger=sqlstore.transactions t=2024-07-03T14:26:57.295243931Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
14:29:26 grafana | logger=sqlstore.transactions t=2024-07-03T14:26:57.323950736Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
14:29:26 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-07-03T14:26:57.35776542Z level=info msg="Patterns update finished" duration=109.073433ms
14:29:26 grafana | logger=provisioning.dashboard t=2024-07-03T14:26:57.590733544Z level=info msg="finished to provision dashboards"
14:29:26 grafana | logger=grafana-apiserver t=2024-07-03T14:26:57.617124364Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
14:29:26 grafana | logger=grafana-apiserver t=2024-07-03T14:26:57.617750199Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
14:29:26 grafana | logger=infra.usagestats t=2024-07-03T14:28:41.196433604Z level=info msg="Usage stats are ready to report"
14:29:26 ===================================
14:29:26 ======== Logs from kafka ========
14:29:26 kafka | ===> User
14:29:26 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
14:29:26 kafka | ===> Configuring ...
14:29:26 kafka | Running in Zookeeper mode...
14:29:26 kafka | ===> Running preflight checks ...
14:29:26 kafka | ===> Check if /var/lib/kafka/data is writable ...
14:29:26 kafka | ===> Check if Zookeeper is healthy ...
14:29:26 kafka | [2024-07-03 14:26:57,304] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,305] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,306] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,306] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,306] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,308] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@61d47554 (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,312] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
14:29:26 kafka | [2024-07-03 14:26:57,316] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
14:29:26 kafka | [2024-07-03 14:26:57,323] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
14:29:26 kafka | [2024-07-03 14:26:57,338] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
14:29:26 kafka | [2024-07-03 14:26:57,338] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
14:29:26 kafka | [2024-07-03 14:26:57,344] INFO Socket connection established, initiating session, client: /172.17.0.8:45326, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
14:29:26 kafka | [2024-07-03 14:26:57,378] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000029b7990000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
14:29:26 kafka | [2024-07-03 14:26:57,508] INFO Session: 0x1000029b7990000 closed (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:57,508] INFO EventThread shut down for session: 0x1000029b7990000 (org.apache.zookeeper.ClientCnxn)
14:29:26 kafka | Using log4j config /etc/kafka/log4j.properties
14:29:26 kafka | ===> Launching ...
14:29:26 kafka | ===> Launching kafka ...
14:29:26 kafka | [2024-07-03 14:26:58,197] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
14:29:26 kafka | [2024-07-03 14:26:58,512] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
14:29:26 kafka | [2024-07-03 14:26:58,580] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
14:29:26 kafka | [2024-07-03 14:26:58,581] INFO starting (kafka.server.KafkaServer)
14:29:26 kafka | [2024-07-03 14:26:58,581] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
14:29:26 kafka | [2024-07-03 14:26:58,593] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
14:29:26 kafka | [2024-07-03 14:26:58,596] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:58,596] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../
share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1
-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../s
hare/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:java.compiler= 
(org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,597] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,598] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,598] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,598] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,599] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) 14:29:26 kafka | [2024-07-03 14:26:58,603] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 14:29:26 kafka | [2024-07-03 14:26:58,608] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 14:29:26 kafka | [2024-07-03 14:26:58,609] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 14:29:26 kafka | [2024-07-03 14:26:58,613] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. 
(org.apache.zookeeper.ClientCnxn) 14:29:26 kafka | [2024-07-03 14:26:58,618] INFO Socket connection established, initiating session, client: /172.17.0.8:45328, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 14:29:26 kafka | [2024-07-03 14:26:58,628] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000029b7990001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 14:29:26 kafka | [2024-07-03 14:26:58,632] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 14:29:26 kafka | [2024-07-03 14:26:58,911] INFO Cluster ID = oH0-pHXbT_qkvD-L4l6e0Q (kafka.server.KafkaServer) 14:29:26 kafka | [2024-07-03 14:26:58,914] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 14:29:26 kafka | [2024-07-03 14:26:58,965] INFO KafkaConfig values: 14:29:26 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 14:29:26 kafka | alter.config.policy.class.name = null 14:29:26 kafka | alter.log.dirs.replication.quota.window.num = 11 14:29:26 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 14:29:26 kafka | authorizer.class.name = 14:29:26 kafka | auto.create.topics.enable = true 14:29:26 kafka | auto.include.jmx.reporter = true 14:29:26 kafka | auto.leader.rebalance.enable = true 14:29:26 kafka | background.threads = 10 14:29:26 kafka | broker.heartbeat.interval.ms = 2000 14:29:26 kafka | broker.id = 1 14:29:26 kafka | broker.id.generation.enable = true 14:29:26 kafka | broker.rack = null 14:29:26 kafka | broker.session.timeout.ms = 9000 14:29:26 kafka | client.quota.callback.class = null 14:29:26 kafka | compression.type = producer 14:29:26 kafka | connection.failed.authentication.delay.ms = 100 14:29:26 kafka | connections.max.idle.ms = 600000 14:29:26 kafka | connections.max.reauth.ms = 0 14:29:26 kafka | control.plane.listener.name = null 14:29:26 kafka | 
controlled.shutdown.enable = true 14:29:26 kafka | controlled.shutdown.max.retries = 3 14:29:26 kafka | controlled.shutdown.retry.backoff.ms = 5000 14:29:26 kafka | controller.listener.names = null 14:29:26 kafka | controller.quorum.append.linger.ms = 25 14:29:26 kafka | controller.quorum.election.backoff.max.ms = 1000 14:29:26 kafka | controller.quorum.election.timeout.ms = 1000 14:29:26 kafka | controller.quorum.fetch.timeout.ms = 2000 14:29:26 kafka | controller.quorum.request.timeout.ms = 2000 14:29:26 kafka | controller.quorum.retry.backoff.ms = 20 14:29:26 kafka | controller.quorum.voters = [] 14:29:26 kafka | controller.quota.window.num = 11 14:29:26 kafka | controller.quota.window.size.seconds = 1 14:29:26 kafka | controller.socket.timeout.ms = 30000 14:29:26 kafka | create.topic.policy.class.name = null 14:29:26 kafka | default.replication.factor = 1 14:29:26 kafka | delegation.token.expiry.check.interval.ms = 3600000 14:29:26 kafka | delegation.token.expiry.time.ms = 86400000 14:29:26 kafka | delegation.token.master.key = null 14:29:26 kafka | delegation.token.max.lifetime.ms = 604800000 14:29:26 kafka | delegation.token.secret.key = null 14:29:26 kafka | delete.records.purgatory.purge.interval.requests = 1 14:29:26 kafka | delete.topic.enable = true 14:29:26 kafka | early.start.listeners = null 14:29:26 kafka | fetch.max.bytes = 57671680 14:29:26 kafka | fetch.purgatory.purge.interval.requests = 1000 14:29:26 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 14:29:26 kafka | group.consumer.heartbeat.interval.ms = 5000 14:29:26 kafka | group.consumer.max.heartbeat.interval.ms = 15000 14:29:26 kafka | group.consumer.max.session.timeout.ms = 60000 14:29:26 kafka | group.consumer.max.size = 2147483647 14:29:26 kafka | group.consumer.min.heartbeat.interval.ms = 5000 14:29:26 kafka | group.consumer.min.session.timeout.ms = 45000 14:29:26 kafka | group.consumer.session.timeout.ms = 45000 14:29:26 kafka | 
group.coordinator.new.enable = false 14:29:26 kafka | group.coordinator.threads = 1 14:29:26 kafka | group.initial.rebalance.delay.ms = 3000 14:29:26 kafka | group.max.session.timeout.ms = 1800000 14:29:26 kafka | group.max.size = 2147483647 14:29:26 kafka | group.min.session.timeout.ms = 6000 14:29:26 kafka | initial.broker.registration.timeout.ms = 60000 14:29:26 kafka | inter.broker.listener.name = PLAINTEXT 14:29:26 kafka | inter.broker.protocol.version = 3.6-IV2 14:29:26 kafka | kafka.metrics.polling.interval.secs = 10 14:29:26 kafka | kafka.metrics.reporters = [] 14:29:26 kafka | leader.imbalance.check.interval.seconds = 300 14:29:26 kafka | leader.imbalance.per.broker.percentage = 10 14:29:26 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 14:29:26 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 14:29:26 kafka | log.cleaner.backoff.ms = 15000 14:29:26 kafka | log.cleaner.dedupe.buffer.size = 134217728 14:29:26 kafka | log.cleaner.delete.retention.ms = 86400000 14:29:26 kafka | log.cleaner.enable = true 14:29:26 kafka | log.cleaner.io.buffer.load.factor = 0.9 14:29:26 kafka | log.cleaner.io.buffer.size = 524288 14:29:26 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 14:29:26 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 14:29:26 kafka | log.cleaner.min.cleanable.ratio = 0.5 14:29:26 kafka | log.cleaner.min.compaction.lag.ms = 0 14:29:26 kafka | log.cleaner.threads = 1 14:29:26 kafka | log.cleanup.policy = [delete] 14:29:26 kafka | log.dir = /tmp/kafka-logs 14:29:26 kafka | log.dirs = /var/lib/kafka/data 14:29:26 kafka | log.flush.interval.messages = 9223372036854775807 14:29:26 kafka | log.flush.interval.ms = null 14:29:26 kafka | log.flush.offset.checkpoint.interval.ms = 60000 14:29:26 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 14:29:26 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 14:29:26 kafka | 
log.index.interval.bytes = 4096 14:29:26 kafka | log.index.size.max.bytes = 10485760 14:29:26 kafka | log.local.retention.bytes = -2 14:29:26 kafka | log.local.retention.ms = -2 14:29:26 kafka | log.message.downconversion.enable = true 14:29:26 kafka | log.message.format.version = 3.0-IV1 14:29:26 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 14:29:26 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 14:29:26 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 14:29:26 kafka | log.message.timestamp.type = CreateTime 14:29:26 kafka | log.preallocate = false 14:29:26 kafka | log.retention.bytes = -1 14:29:26 kafka | log.retention.check.interval.ms = 300000 14:29:26 kafka | log.retention.hours = 168 14:29:26 kafka | log.retention.minutes = null 14:29:26 kafka | log.retention.ms = null 14:29:26 kafka | log.roll.hours = 168 14:29:26 kafka | log.roll.jitter.hours = 0 14:29:26 kafka | log.roll.jitter.ms = null 14:29:26 kafka | log.roll.ms = null 14:29:26 kafka | log.segment.bytes = 1073741824 14:29:26 kafka | log.segment.delete.delay.ms = 60000 14:29:26 kafka | max.connection.creation.rate = 2147483647 14:29:26 kafka | max.connections = 2147483647 14:29:26 kafka | max.connections.per.ip = 2147483647 14:29:26 kafka | max.connections.per.ip.overrides = 14:29:26 kafka | max.incremental.fetch.session.cache.slots = 1000 14:29:26 kafka | message.max.bytes = 1048588 14:29:26 kafka | metadata.log.dir = null 14:29:26 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 14:29:26 kafka | metadata.log.max.snapshot.interval.ms = 3600000 14:29:26 kafka | metadata.log.segment.bytes = 1073741824 14:29:26 kafka | metadata.log.segment.min.bytes = 8388608 14:29:26 kafka | metadata.log.segment.ms = 604800000 14:29:26 kafka | metadata.max.idle.interval.ms = 500 14:29:26 kafka | metadata.max.retention.bytes = 104857600 14:29:26 kafka | metadata.max.retention.ms = 604800000 14:29:26 kafka | metric.reporters = [] 14:29:26 kafka | 
metrics.num.samples = 2 14:29:26 kafka | metrics.recording.level = INFO 14:29:26 kafka | metrics.sample.window.ms = 30000 14:29:26 kafka | min.insync.replicas = 1 14:29:26 kafka | node.id = 1 14:29:26 kafka | num.io.threads = 8 14:29:26 kafka | num.network.threads = 3 14:29:26 kafka | num.partitions = 1 14:29:26 kafka | num.recovery.threads.per.data.dir = 1 14:29:26 kafka | num.replica.alter.log.dirs.threads = null 14:29:26 kafka | num.replica.fetchers = 1 14:29:26 kafka | offset.metadata.max.bytes = 4096 14:29:26 kafka | offsets.commit.required.acks = -1 14:29:26 kafka | offsets.commit.timeout.ms = 5000 14:29:26 kafka | offsets.load.buffer.size = 5242880 14:29:26 kafka | offsets.retention.check.interval.ms = 600000 14:29:26 kafka | offsets.retention.minutes = 10080 14:29:26 kafka | offsets.topic.compression.codec = 0 14:29:26 kafka | offsets.topic.num.partitions = 50 14:29:26 kafka | offsets.topic.replication.factor = 1 14:29:26 kafka | offsets.topic.segment.bytes = 104857600 14:29:26 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 14:29:26 kafka | password.encoder.iterations = 4096 14:29:26 kafka | password.encoder.key.length = 128 14:29:26 kafka | password.encoder.keyfactory.algorithm = null 14:29:26 kafka | password.encoder.old.secret = null 14:29:26 kafka | password.encoder.secret = null 14:29:26 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 14:29:26 kafka | process.roles = [] 14:29:26 kafka | producer.id.expiration.check.interval.ms = 600000 14:29:26 kafka | producer.id.expiration.ms = 86400000 14:29:26 kafka | producer.purgatory.purge.interval.requests = 1000 14:29:26 kafka | queued.max.request.bytes = -1 14:29:26 kafka | queued.max.requests = 500 14:29:26 kafka | quota.window.num = 11 14:29:26 kafka | quota.window.size.seconds = 1 14:29:26 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 14:29:26 kafka | remote.log.manager.task.interval.ms = 30000 
14:29:26 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 14:29:26 kafka | remote.log.manager.task.retry.backoff.ms = 500 14:29:26 kafka | remote.log.manager.task.retry.jitter = 0.2 14:29:26 kafka | remote.log.manager.thread.pool.size = 10 14:29:26 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 14:29:26 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 14:29:26 kafka | remote.log.metadata.manager.class.path = null 14:29:26 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 14:29:26 kafka | remote.log.metadata.manager.listener.name = null 14:29:26 kafka | remote.log.reader.max.pending.tasks = 100 14:29:26 kafka | remote.log.reader.threads = 10 14:29:26 kafka | remote.log.storage.manager.class.name = null 14:29:26 kafka | remote.log.storage.manager.class.path = null 14:29:26 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 14:29:26 kafka | remote.log.storage.system.enable = false 14:29:26 kafka | replica.fetch.backoff.ms = 1000 14:29:26 kafka | replica.fetch.max.bytes = 1048576 14:29:26 kafka | replica.fetch.min.bytes = 1 14:29:26 kafka | replica.fetch.response.max.bytes = 10485760 14:29:26 kafka | replica.fetch.wait.max.ms = 500 14:29:26 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 14:29:26 kafka | replica.lag.time.max.ms = 30000 14:29:26 kafka | replica.selector.class = null 14:29:26 kafka | replica.socket.receive.buffer.bytes = 65536 14:29:26 kafka | replica.socket.timeout.ms = 30000 14:29:26 kafka | replication.quota.window.num = 11 14:29:26 kafka | replication.quota.window.size.seconds = 1 14:29:26 kafka | request.timeout.ms = 30000 14:29:26 kafka | reserved.broker.max.id = 1000 14:29:26 kafka | sasl.client.callback.handler.class = null 14:29:26 kafka | sasl.enabled.mechanisms = [GSSAPI] 14:29:26 kafka | sasl.jaas.config = null 14:29:26 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:29:26 kafka | 
sasl.kerberos.min.time.before.relogin = 60000 14:29:26 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 14:29:26 kafka | sasl.kerberos.service.name = null 14:29:26 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 14:29:26 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 14:29:26 kafka | sasl.login.callback.handler.class = null 14:29:26 kafka | sasl.login.class = null 14:29:26 kafka | sasl.login.connect.timeout.ms = null 14:29:26 kafka | sasl.login.read.timeout.ms = null 14:29:26 kafka | sasl.login.refresh.buffer.seconds = 300 14:29:26 kafka | sasl.login.refresh.min.period.seconds = 60 14:29:26 kafka | sasl.login.refresh.window.factor = 0.8 14:29:26 kafka | sasl.login.refresh.window.jitter = 0.05 14:29:26 kafka | sasl.login.retry.backoff.max.ms = 10000 14:29:26 kafka | sasl.login.retry.backoff.ms = 100 14:29:26 kafka | sasl.mechanism.controller.protocol = GSSAPI 14:29:26 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 14:29:26 kafka | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 kafka | sasl.oauthbearer.expected.audience = null 14:29:26 kafka | sasl.oauthbearer.expected.issuer = null 14:29:26 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:29:26 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 kafka | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 kafka | sasl.oauthbearer.scope.claim.name = scope 14:29:26 kafka | sasl.oauthbearer.sub.claim.name = sub 14:29:26 kafka | sasl.oauthbearer.token.endpoint.url = null 14:29:26 kafka | sasl.server.callback.handler.class = null 14:29:26 kafka | sasl.server.max.receive.size = 524288 14:29:26 kafka | security.inter.broker.protocol = PLAINTEXT 14:29:26 kafka | security.providers = null 14:29:26 kafka | server.max.startup.time.ms = 9223372036854775807 14:29:26 kafka | socket.connection.setup.timeout.max.ms = 30000 14:29:26 kafka | socket.connection.setup.timeout.ms = 10000 14:29:26 kafka 
| socket.listen.backlog.size = 50 14:29:26 kafka | socket.receive.buffer.bytes = 102400 14:29:26 kafka | socket.request.max.bytes = 104857600 14:29:26 kafka | socket.send.buffer.bytes = 102400 14:29:26 kafka | ssl.cipher.suites = [] 14:29:26 kafka | ssl.client.auth = none 14:29:26 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 kafka | ssl.endpoint.identification.algorithm = https 14:29:26 kafka | ssl.engine.factory.class = null 14:29:26 kafka | ssl.key.password = null 14:29:26 kafka | ssl.keymanager.algorithm = SunX509 14:29:26 kafka | ssl.keystore.certificate.chain = null 14:29:26 kafka | ssl.keystore.key = null 14:29:26 kafka | ssl.keystore.location = null 14:29:26 kafka | ssl.keystore.password = null 14:29:26 kafka | ssl.keystore.type = JKS 14:29:26 kafka | ssl.principal.mapping.rules = DEFAULT 14:29:26 kafka | ssl.protocol = TLSv1.3 14:29:26 kafka | ssl.provider = null 14:29:26 kafka | ssl.secure.random.implementation = null 14:29:26 kafka | ssl.trustmanager.algorithm = PKIX 14:29:26 kafka | ssl.truststore.certificates = null 14:29:26 kafka | ssl.truststore.location = null 14:29:26 kafka | ssl.truststore.password = null 14:29:26 kafka | ssl.truststore.type = JKS 14:29:26 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 14:29:26 kafka | transaction.max.timeout.ms = 900000 14:29:26 kafka | transaction.partition.verification.enable = true 14:29:26 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 14:29:26 kafka | transaction.state.log.load.buffer.size = 5242880 14:29:26 kafka | transaction.state.log.min.isr = 2 14:29:26 kafka | transaction.state.log.num.partitions = 50 14:29:26 kafka | transaction.state.log.replication.factor = 3 14:29:26 kafka | transaction.state.log.segment.bytes = 104857600 14:29:26 kafka | transactional.id.expiration.ms = 604800000 14:29:26 kafka | unclean.leader.election.enable = false 14:29:26 kafka | unstable.api.versions.enable = false 14:29:26 kafka | 
zookeeper.clientCnxnSocket = null 14:29:26 kafka | zookeeper.connect = zookeeper:2181 14:29:26 kafka | zookeeper.connection.timeout.ms = null 14:29:26 kafka | zookeeper.max.in.flight.requests = 10 14:29:26 kafka | zookeeper.metadata.migration.enable = false 14:29:26 kafka | zookeeper.metadata.migration.min.batch.size = 200 14:29:26 kafka | zookeeper.session.timeout.ms = 18000 14:29:26 kafka | zookeeper.set.acl = false 14:29:26 kafka | zookeeper.ssl.cipher.suites = null 14:29:26 kafka | zookeeper.ssl.client.enable = false 14:29:26 kafka | zookeeper.ssl.crl.enable = false 14:29:26 kafka | zookeeper.ssl.enabled.protocols = null 14:29:26 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 14:29:26 kafka | zookeeper.ssl.keystore.location = null 14:29:26 kafka | zookeeper.ssl.keystore.password = null 14:29:26 kafka | zookeeper.ssl.keystore.type = null 14:29:26 kafka | zookeeper.ssl.ocsp.enable = false 14:29:26 kafka | zookeeper.ssl.protocol = TLSv1.2 14:29:26 kafka | zookeeper.ssl.truststore.location = null 14:29:26 kafka | zookeeper.ssl.truststore.password = null 14:29:26 kafka | zookeeper.ssl.truststore.type = null 14:29:26 kafka | (kafka.server.KafkaConfig) 14:29:26 kafka | [2024-07-03 14:26:58,992] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 14:29:26 kafka | [2024-07-03 14:26:58,992] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 14:29:26 kafka | [2024-07-03 14:26:58,994] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 14:29:26 kafka | [2024-07-03 14:26:58,995] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 14:29:26 kafka | [2024-07-03 14:26:59,020] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:26:59,024] INFO No logs found to be loaded in 
/var/lib/kafka/data (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:26:59,033] INFO Loaded 0 logs in 12ms (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:26:59,034] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:26:59,035] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:26:59,046] INFO Starting the log cleaner (kafka.log.LogCleaner) 14:29:26 kafka | [2024-07-03 14:26:59,090] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 14:29:26 kafka | [2024-07-03 14:26:59,105] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 14:29:26 kafka | [2024-07-03 14:26:59,118] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 14:29:26 kafka | [2024-07-03 14:26:59,161] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 14:29:26 kafka | [2024-07-03 14:26:59,477] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 14:29:26 kafka | [2024-07-03 14:26:59,496] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 14:29:26 kafka | [2024-07-03 14:26:59,497] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 14:29:26 kafka | [2024-07-03 14:26:59,502] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 14:29:26 kafka | [2024-07-03 14:26:59,506] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 14:29:26 
kafka | [2024-07-03 14:26:59,529] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,530] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,532] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,533] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,535] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,546] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
14:29:26 kafka | [2024-07-03 14:26:59,547] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
14:29:26 kafka | [2024-07-03 14:26:59,568] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
14:29:26 kafka | [2024-07-03 14:26:59,592] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1720016819584,1720016819584,1,0,0,72057773211844609,258,0,27
14:29:26 kafka | (kafka.zk.KafkaZkClient)
14:29:26 kafka | [2024-07-03 14:26:59,593] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
14:29:26 kafka | [2024-07-03 14:26:59,653] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
14:29:26 kafka | [2024-07-03 14:26:59,659] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,665] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,666] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,672] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
14:29:26 kafka | [2024-07-03 14:26:59,680] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:26:59,681] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,684] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:26:59,685] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,691] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
14:29:26 kafka | [2024-07-03 14:26:59,708] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
14:29:26 kafka | [2024-07-03 14:26:59,714] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
14:29:26 kafka | [2024-07-03 14:26:59,714] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,716] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
14:29:26 kafka | [2024-07-03 14:26:59,716] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
14:29:26 kafka | [2024-07-03 14:26:59,719] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,723] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,725] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,741] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,745] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,750] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
14:29:26 kafka | [2024-07-03 14:26:59,756] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
14:29:26 kafka | [2024-07-03 14:26:59,756] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:29:26 kafka | [2024-07-03 14:26:59,757] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,758] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,758] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,758] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,760] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,761] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,761] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,762] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
14:29:26 kafka | [2024-07-03 14:26:59,763] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,765] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
14:29:26 kafka | [2024-07-03 14:26:59,772] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
14:29:26 kafka | [2024-07-03 14:26:59,773] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
14:29:26 kafka | [2024-07-03 14:26:59,776] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
14:29:26 kafka | [2024-07-03 14:26:59,776] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
14:29:26 kafka | [2024-07-03 14:26:59,777] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
14:29:26 kafka | [2024-07-03 14:26:59,778] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
14:29:26 kafka | [2024-07-03 14:26:59,780] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
14:29:26 kafka | [2024-07-03 14:26:59,780] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,780] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
14:29:26 kafka | [2024-07-03 14:26:59,782] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.8:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
14:29:26 kafka | [2024-07-03 14:26:59,784] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
14:29:26 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
14:29:26 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
14:29:26 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
14:29:26 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
14:29:26 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
14:29:26 kafka | [2024-07-03 14:26:59,789] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
14:29:26 kafka | [2024-07-03 14:26:59,785] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,789] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
14:29:26 kafka | [2024-07-03 14:26:59,790] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,790] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,791] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,793] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,804] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
14:29:26 kafka | [2024-07-03 14:26:59,805] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:26:59,808] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
14:29:26 kafka | [2024-07-03 14:26:59,810] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
14:29:26 kafka | [2024-07-03 14:26:59,819] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
14:29:26 kafka | [2024-07-03 14:26:59,819] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
14:29:26 kafka | [2024-07-03 14:26:59,819] INFO Kafka startTimeMs: 1720016819814 (org.apache.kafka.common.utils.AppInfoParser)
14:29:26 kafka | [2024-07-03 14:26:59,820] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
14:29:26 kafka | [2024-07-03 14:26:59,891] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
14:29:26 kafka | [2024-07-03 14:26:59,949] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:29:26 kafka | [2024-07-03 14:26:59,973] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
14:29:26 kafka | [2024-07-03 14:27:00,008] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
14:29:26 kafka | [2024-07-03 14:27:04,806] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:27:04,806] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:27:23,309] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:27:23,310] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
14:29:26 kafka | [2024-07-03 14:27:23,312] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
14:29:26 kafka | [2024-07-03 14:27:23,327] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:27:23,363] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(eAOj3Kj-RK-B_kDbDL1rIg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(r0dxtfWfS62-f-_Ua7qa2A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:27:23,367] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
14:29:26 kafka | [2024-07-03 14:27:23,371] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,371] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,372] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,373] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,374] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,375] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,377] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,385] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,391] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,392] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,393] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,394] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,395] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed
state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,399] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,400] TRACE [Controller id=1 epoch=1] Changed state of replica 1 
for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,400] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 
14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,565] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,566] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,567] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,569] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,571] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,574] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,576] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,577] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,583] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,585] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,586] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,587] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,588] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,589] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,590] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,590] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,590] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,591] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,591] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,591] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,592] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,592] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,592] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[],
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,593] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,594] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
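The block of TRACE entries above repeats one structured record per partition. When working with captures like this, it can help to pull the topic and partition index out of each line programmatically; a minimal sketch (the function and regex names are mine, not part of Kafka or this job):

```python
import re

# Each TRACE entry embeds a LeaderAndIsrPartitionState(...) record; this
# pattern pulls out just the topic name and partition index.
LEADER_AND_ISR = re.compile(
    r"LeaderAndIsrPartitionState\(topicName='(?P<topic>[^']+)', "
    r"partitionIndex=(?P<partition>\d+)"
)

def parse_partition(line: str):
    """Return (topic, partition) for a LeaderAndIsr TRACE line, else None."""
    m = LEADER_AND_ISR.search(line)
    if m is None:
        return None
    return m.group("topic"), int(m.group("partition"))
```

Feeding every line of the console output through `parse_partition` and collecting the non-`None` results would, for this section, yield all 50 `__consumer_offsets` partitions plus `policy-pdp-pap`.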
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,639] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-27 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,640] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,642] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for 
partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 14:29:26 kafka | [2024-07-03 14:27:23,642] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,706] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:23,722] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:23,727] INFO [Partition __consumer_offsets-3 
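The broker is becoming leader for all 50 `__consumer_offsets` partitions here because it is the only broker in this single-node CSIT setup. Kafka routes a consumer group's committed offsets to one of those partitions by hashing the group id, mirroring `Utils.abs(groupId.hashCode) % numPartitions` in the broker. A sketch of that mapping (the Python function names are mine):

```python
def java_string_hashcode(s: str) -> int:
    """Replicate Java's String.hashCode() with 32-bit signed overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
    """__consumer_offsets partition that stores this group's committed offsets."""
    return abs(java_string_hashcode(group_id)) % num_partitions
```

With the default `offsets.topic.num.partitions=50` seen in this log, every consumer group used by the CSIT suite deterministically lands on exactly one of the partitions listed above.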
broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,729] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,731] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,755] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:23,757] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:23,757] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,757] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,758] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,766] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:23,768] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:23,768] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,768] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,768] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,776] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:23,776] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:23,777] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,777] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,777] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,784] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:23,785] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:23,785] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,785] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,785] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,793] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:23,794] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:23,794] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,794] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,794] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:23,804] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:23,805] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:23,806] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,806] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:23,806] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,814] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,815] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,815] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,815] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,815] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,822] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,822] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,822] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,822] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,823] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,829] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,832] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,832] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,832] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,832] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,843] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,844] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,844] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,844] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,844] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,850] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,851] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,851] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,851] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,851] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,860] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,860] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,860] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,861] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,861] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,868] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,868] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,868] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,868] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,869] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,875] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,876] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,876] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,876] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,876] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,885] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,886] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,886] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,886] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,886] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,895] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,896] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,896] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,896] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,896] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,903] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,904] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,905] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,905] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,905] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,914] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,915] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,915] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,915] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,915] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,930] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,931] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,931] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,931] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,931] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,939] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,940] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,940] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,940] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,940] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,945] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,945] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,945] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,946] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,946] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,953] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,954] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,954] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,954] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,954] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,961] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,962] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,962] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,962] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,962] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,970] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,971] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,971] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,971] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,972] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,977] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,978] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,978] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,978] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,978] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,984] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,985] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,985] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,985] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,985] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,991] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,991] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,991] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,991] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,991] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:23,997] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:23,997] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:23,997] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,998] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:23,998] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,003] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,004] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,005] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,005] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,005] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,011] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,012] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,012] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,012] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,012] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,020] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,022] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,022] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,023] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,023] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,031] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,032] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,033] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,033] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,033] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,043] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,044] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,044] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,044] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,045] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,055] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,056] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,056] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,056] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,056] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(eAOj3Kj-RK-B_kDbDL1rIg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,063] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,064] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,064] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,064] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,065] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,076] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,077] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,077] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,077] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,077] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,085] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:29:26 kafka | [2024-07-03 14:27:24,086] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:29:26 kafka | [2024-07-03 14:27:24,086] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,086] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
14:29:26 kafka | [2024-07-03 14:27:24,086] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,094] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,095] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,095] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,095] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,095] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,102] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,103] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,103] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,103] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,103] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,113] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,114] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,114] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,114] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,115] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,121] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,121] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,121] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,122] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,122] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,131] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,132] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,132] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,132] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,132] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,140] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,141] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,141] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,141] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,141] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,148] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,149] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,149] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,149] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,149] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,160] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,161] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,161] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,161] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,161] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,172] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,173] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,173] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,173] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,173] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,180] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,181] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,181] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,181] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,181] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,187] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,188] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,188] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,188] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,188] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,194] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,194] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,194] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,194] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,194] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,199] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:29:26 kafka | [2024-07-03 14:27:24,199] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:29:26 kafka | [2024-07-03 14:27:24,200] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,200] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 14:29:26 kafka | [2024-07-03 14:27:24,200] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(r0dxtfWfS62-f-_Ua7qa2A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-4 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from 
controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 14:29:26 kafka | [2024-07-03 
14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,204] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,210] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:24,212] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,213] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,214] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,215] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,216] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
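(Not part of the job output: the election lines above repeat once per partition of the internal __consumer_offsets topic, which defaults to 50 partitions. If one wanted to verify from a captured log that this broker was elected coordinator for every partition, a minimal sketch would be the following; the function name and regex are illustrative, not from any ONAP or Kafka tooling.)

```python
import re

# Matches the broker's "Elected as the group coordinator" log lines shown above.
ELECTED_RE = re.compile(
    r"\[GroupCoordinator (?P<broker>\d+)\]: Elected as the group coordinator "
    r"for partition (?P<partition>\d+) in epoch (?P<epoch>\d+)"
)

def elected_partitions(lines):
    """Return the set of __consumer_offsets partitions this broker was elected for."""
    out = set()
    for line in lines:
        m = ELECTED_RE.search(line)
        if m:
            out.add(int(m.group("partition")))
    return out
```

Fed the full excerpt, `elected_partitions` should return `set(range(50))`; a smaller result would indicate the election log is incomplete or truncated.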
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:29:26 kafka | [2024-07-03 14:27:24,217] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,219] INFO [Broker id=1] Finished LeaderAndIsr request in 638ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,221] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,222] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,223] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,224] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,225] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,225] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,228] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=r0dxtfWfS62-f-_Ua7qa2A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=eAOj3Kj-RK-B_kDbDL1rIg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,228] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,229] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,230] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,231] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,232] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,233] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,234] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,234] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,234] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:29:26 kafka | [2024-07-03 14:27:24,239] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:29:26 kafka | [2024-07-03 14:27:24,240] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0,
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 
14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,241] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent 
by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 
in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,242] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,243] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,246] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,246] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:29:26 kafka | [2024-07-03 14:27:24,331] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:24,345] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:24,352] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 in Empty state. Created a new member id consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:24,355] INFO [GroupCoordinator 1]: Preparing to rebalance group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 in state PreparingRebalance with old generation 0 (__consumer_offsets-18) (reason: Adding new member consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:24,582] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group d8b2b84c-6638-4843-9df5-de6a0e09886f in Empty state. Created a new member id consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:24,586] INFO [GroupCoordinator 1]: Preparing to rebalance group d8b2b84c-6638-4843-9df5-de6a0e09886f in state PreparingRebalance with old generation 0 (__consumer_offsets-29) (reason: Adding new member consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:27,358] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:27,361] INFO [GroupCoordinator 1]: Stabilized group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 generation 1 (__consumer_offsets-18) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:27,383] INFO [GroupCoordinator 1]: Assignment received from leader consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5 for group 98eef03c-6c97-41d2-b0d1-0e3fd148d393 for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:27,384] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:27,587] INFO [GroupCoordinator 1]: Stabilized group d8b2b84c-6638-4843-9df5-de6a0e09886f generation 1 (__consumer_offsets-29) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:29:26 kafka | [2024-07-03 14:27:27,601] INFO [GroupCoordinator 1]: Assignment received from leader consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab for group d8b2b84c-6638-4843-9df5-de6a0e09886f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 14:29:26 =================================== 14:29:26 ======== Logs from mariadb ======== 14:29:26 mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 14:29:26 mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 14:29:26 mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 14:29:26 mariadb | 2024-07-03 14:26:47+00:00 [Note] [Entrypoint]: Initializing database files 14:29:26 mariadb | 2024-07-03 14:26:47 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 14:29:26 mariadb | 2024-07-03 14:26:47 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 14:29:26 mariadb | 2024-07-03 14:26:47 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
14:29:26 mariadb | 14:29:26 mariadb | 14:29:26 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 14:29:26 mariadb | To do so, start the server, then issue the following command: 14:29:26 mariadb | 14:29:26 mariadb | '/usr/bin/mysql_secure_installation' 14:29:26 mariadb | 14:29:26 mariadb | which will also give you the option of removing the test 14:29:26 mariadb | databases and anonymous user created by default. This is 14:29:26 mariadb | strongly recommended for production servers. 14:29:26 mariadb | 14:29:26 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 14:29:26 mariadb | 14:29:26 mariadb | Please report any problems at https://mariadb.org/jira 14:29:26 mariadb | 14:29:26 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 14:29:26 mariadb | 14:29:26 mariadb | Consider joining MariaDB's strong and vibrant community: 14:29:26 mariadb | https://mariadb.org/get-involved/ 14:29:26 mariadb | 14:29:26 mariadb | 2024-07-03 14:26:48+00:00 [Note] [Entrypoint]: Database files initialized 14:29:26 mariadb | 2024-07-03 14:26:48+00:00 [Note] [Entrypoint]: Starting temporary server 14:29:26 mariadb | 2024-07-03 14:26:48+00:00 [Note] [Entrypoint]: Waiting for server startup 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 
14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Number of transaction pools: 1 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Completed initialization of buffer pool 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: 128 rollback segments are active. 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] InnoDB: log sequence number 45452; transaction id 14 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] Plugin 'FEEDBACK' is disabled. 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 14:29:26 mariadb | 2024-07-03 14:26:49 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
14:29:26 mariadb | 2024-07-03 14:26:49 0 [Note] mariadbd: ready for connections. 14:29:26 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 14:29:26 mariadb | 2024-07-03 14:26:50+00:00 [Note] [Entrypoint]: Temporary server started. 14:29:26 mariadb | 2024-07-03 14:26:51+00:00 [Note] [Entrypoint]: Creating user policy_user 14:29:26 mariadb | 2024-07-03 14:26:51+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 14:29:26 mariadb | 14:29:26 mariadb | 2024-07-03 14:26:51+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 14:29:26 mariadb | 14:29:26 mariadb | 2024-07-03 14:26:51+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 14:29:26 mariadb | #!/bin/bash -xv 14:29:26 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 14:29:26 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 14:29:26 mariadb | # 14:29:26 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 14:29:26 mariadb | # you may not use this file except in compliance with the License. 14:29:26 mariadb | # You may obtain a copy of the License at 14:29:26 mariadb | # 14:29:26 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 14:29:26 mariadb | # 14:29:26 mariadb | # Unless required by applicable law or agreed to in writing, software 14:29:26 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 14:29:26 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14:29:26 mariadb | # See the License for the specific language governing permissions and 14:29:26 mariadb | # limitations under the License. 
14:29:26 mariadb |
14:29:26 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:29:26 mariadb | do
14:29:26 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
14:29:26 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
14:29:26 mariadb | done
14:29:26 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:29:26 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
14:29:26 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:29:26 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:29:26 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
14:29:26 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:29:26 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:29:26 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
14:29:26 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:29:26 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:29:26 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
14:29:26 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:29:26 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:29:26 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
14:29:26 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:29:26 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:29:26 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
14:29:26 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:29:26 mariadb |
14:29:26 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
14:29:26 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
14:29:26 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
14:29:26 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
14:29:26 mariadb |
14:29:26 mariadb | 2024-07-03 14:26:52+00:00 [Note] [Entrypoint]: Stopping temporary server
14:29:26 mariadb | 2024-07-03 14:26:52 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
14:29:26 mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: FTS optimize thread exiting.
14:29:26 mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: Starting shutdown...
14:29:26 mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
14:29:26 mariadb | 2024-07-03 14:26:52 0 [Note] InnoDB: Buffer pool(s) dump completed at 240703 14:26:52
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Shutdown completed; log sequence number 330344; transaction id 298
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd: Shutdown complete
14:29:26 mariadb |
14:29:26 mariadb | 2024-07-03 14:26:53+00:00 [Note] [Entrypoint]: Temporary server stopped
14:29:26 mariadb |
14:29:26 mariadb | 2024-07-03 14:26:53+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
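The traced db.sh above is a simple loop over six databases: create each one, grant the policy user full privileges on it, then flush. A minimal standalone sketch of that loop (the `run` wrapper collecting statements in an array instead of piping to a live `mysql` client is an assumption for inspection; the `MYSQL_*` defaults stand in for the container environment):

```shell
#!/bin/bash
# Sketch of the /docker-entrypoint-initdb.d/db.sh provisioning loop traced in
# the log. run() is a stand-in for:
#   mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "$1"
# so the generated SQL can be inspected without a server.
MYSQL_ROOT_PASSWORD="${MYSQL_ROOT_PASSWORD:-secret}"
MYSQL_USER="${MYSQL_USER:-policy_user}"

SQL_STMTS=()
run() { SQL_STMTS+=("$1"); }

for db in migration pooling policyadmin operationshistory clampacm policyclamp
do
    run "CREATE DATABASE IF NOT EXISTS ${db};"
    run "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
done
run "FLUSH PRIVILEGES;"

# Print the 13 statements (2 per database + the final flush).
printf '%s\n' "${SQL_STMTS[@]}"
```

The `+`-prefixed lines in the log are the same loop echoed by `bash -xv` trace mode, with the variables already expanded.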
14:29:26 mariadb |
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Number of transaction pools: 1
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Completed initialization of buffer pool
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: 128 rollback segments are active.
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: log sequence number 330344; transaction id 299
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] Plugin 'FEEDBACK' is disabled.
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] Server socket created on IP: '0.0.0.0'.
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] Server socket created on IP: '::'.
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] mariadbd: ready for connections.
14:29:26 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
14:29:26 mariadb | 2024-07-03 14:26:53 0 [Note] InnoDB: Buffer pool(s) load completed at 240703 14:26:53
14:29:26 mariadb | 2024-07-03 14:26:53 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
14:29:26 mariadb | 2024-07-03 14:26:53 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
14:29:26 mariadb | 2024-07-03 14:26:54 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
14:29:26 mariadb | 2024-07-03 14:26:54 32 [Warning] Aborted connection 32 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
14:29:26 ===================================
14:29:26 ======== Logs from apex-pdp ========
14:29:26 policy-apex-pdp | Waiting for mariadb port 3306...
14:29:26 policy-apex-pdp | mariadb (172.17.0.3:3306) open
14:29:26 policy-apex-pdp | Waiting for kafka port 9092...
14:29:26 policy-apex-pdp | Waiting for pap port 6969...
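The "Waiting for ... port" probes above block apex-pdp startup until each dependency (mariadb, kafka, pap) accepts TCP connections. The log does not show how the probe is implemented (the real entrypoint may use netcat); a bash-only sketch using the `/dev/tcp` pseudo-device, with a hypothetical `wait_for_port` function name:

```shell
#!/bin/bash
# wait_for_port HOST PORT [RETRIES] -- poll until a TCP connect succeeds.
# Sketch only; /dev/tcp/<host>/<port> is a bash redirection feature, not a
# real filesystem path, so this requires bash (not plain sh).
wait_for_port() {
    local host=$1 port=$2 retries=${3:-30} i
    for ((i = 0; i < retries; i++)); do
        # Opening the pseudo-device attempts a TCP connection; the subshell
        # closes the descriptor again as soon as it exits.
        if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
            echo "${host} (${port}) open"
            return 0
        fi
        sleep 1
    done
    echo "timed out waiting for ${host}:${port}" >&2
    return 1
}
```

Mirroring the log, the container would effectively run `wait_for_port mariadb 3306 && wait_for_port kafka 9092 && wait_for_port pap 6969` before launching the JVM.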
14:29:26 policy-apex-pdp | kafka (172.17.0.8:9092) open 14:29:26 policy-apex-pdp | pap (172.17.0.10:6969) open 14:29:26 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 14:29:26 policy-apex-pdp | [2024-07-03T14:27:23.664+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 14:29:26 policy-apex-pdp | [2024-07-03T14:27:23.824+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:29:26 policy-apex-pdp | allow.auto.create.topics = true 14:29:26 policy-apex-pdp | auto.commit.interval.ms = 5000 14:29:26 policy-apex-pdp | auto.include.jmx.reporter = true 14:29:26 policy-apex-pdp | auto.offset.reset = latest 14:29:26 policy-apex-pdp | bootstrap.servers = [kafka:9092] 14:29:26 policy-apex-pdp | check.crcs = true 14:29:26 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 14:29:26 policy-apex-pdp | client.id = consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-1 14:29:26 policy-apex-pdp | client.rack = 14:29:26 policy-apex-pdp | connections.max.idle.ms = 540000 14:29:26 policy-apex-pdp | default.api.timeout.ms = 60000 14:29:26 policy-apex-pdp | enable.auto.commit = true 14:29:26 policy-apex-pdp | exclude.internal.topics = true 14:29:26 
policy-apex-pdp | fetch.max.bytes = 52428800 14:29:26 policy-apex-pdp | fetch.max.wait.ms = 500 14:29:26 policy-apex-pdp | fetch.min.bytes = 1 14:29:26 policy-apex-pdp | group.id = d8b2b84c-6638-4843-9df5-de6a0e09886f 14:29:26 policy-apex-pdp | group.instance.id = null 14:29:26 policy-apex-pdp | heartbeat.interval.ms = 3000 14:29:26 policy-apex-pdp | interceptor.classes = [] 14:29:26 policy-apex-pdp | internal.leave.group.on.close = true 14:29:26 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 14:29:26 policy-apex-pdp | isolation.level = read_uncommitted 14:29:26 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:29:26 policy-apex-pdp | max.partition.fetch.bytes = 1048576 14:29:26 policy-apex-pdp | max.poll.interval.ms = 300000 14:29:26 policy-apex-pdp | max.poll.records = 500 14:29:26 policy-apex-pdp | metadata.max.age.ms = 300000 14:29:26 policy-apex-pdp | metric.reporters = [] 14:29:26 policy-apex-pdp | metrics.num.samples = 2 14:29:26 policy-apex-pdp | metrics.recording.level = INFO 14:29:26 policy-apex-pdp | metrics.sample.window.ms = 30000 14:29:26 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:29:26 policy-apex-pdp | receive.buffer.bytes = 65536 14:29:26 policy-apex-pdp | reconnect.backoff.max.ms = 1000 14:29:26 policy-apex-pdp | reconnect.backoff.ms = 50 14:29:26 policy-apex-pdp | request.timeout.ms = 30000 14:29:26 policy-apex-pdp | retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.client.callback.handler.class = null 14:29:26 policy-apex-pdp | sasl.jaas.config = null 14:29:26 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:29:26 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 14:29:26 policy-apex-pdp | sasl.kerberos.service.name = null 14:29:26 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 14:29:26 
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 14:29:26 policy-apex-pdp | sasl.login.callback.handler.class = null 14:29:26 policy-apex-pdp | sasl.login.class = null 14:29:26 policy-apex-pdp | sasl.login.connect.timeout.ms = null 14:29:26 policy-apex-pdp | sasl.login.read.timeout.ms = null 14:29:26 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 14:29:26 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 14:29:26 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 14:29:26 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 14:29:26 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 14:29:26 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.mechanism = GSSAPI 14:29:26 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 14:29:26 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 14:29:26 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 14:29:26 policy-apex-pdp | security.protocol = PLAINTEXT 14:29:26 policy-apex-pdp | security.providers = null 14:29:26 policy-apex-pdp | send.buffer.bytes = 131072 14:29:26 policy-apex-pdp | session.timeout.ms = 45000 14:29:26 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 14:29:26 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 14:29:26 policy-apex-pdp | ssl.cipher.suites = null 14:29:26 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 policy-apex-pdp | 
ssl.endpoint.identification.algorithm = https 14:29:26 policy-apex-pdp | ssl.engine.factory.class = null 14:29:26 policy-apex-pdp | ssl.key.password = null 14:29:26 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 14:29:26 policy-apex-pdp | ssl.keystore.certificate.chain = null 14:29:26 policy-apex-pdp | ssl.keystore.key = null 14:29:26 policy-apex-pdp | ssl.keystore.location = null 14:29:26 policy-apex-pdp | ssl.keystore.password = null 14:29:26 policy-apex-pdp | ssl.keystore.type = JKS 14:29:26 policy-apex-pdp | ssl.protocol = TLSv1.3 14:29:26 policy-apex-pdp | ssl.provider = null 14:29:26 policy-apex-pdp | ssl.secure.random.implementation = null 14:29:26 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 14:29:26 policy-apex-pdp | ssl.truststore.certificates = null 14:29:26 policy-apex-pdp | ssl.truststore.location = null 14:29:26 policy-apex-pdp | ssl.truststore.password = null 14:29:26 policy-apex-pdp | ssl.truststore.type = JKS 14:29:26 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:29:26 policy-apex-pdp | 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.037+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.038+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.038+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016844036 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.041+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-1, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Subscribed to topic(s): policy-pdp-pap 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.056+00:00|INFO|ServiceManager|main] service manager starting 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.056+00:00|INFO|ServiceManager|main] service manager starting topics 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.058+00:00|INFO|SingleThreadedBusTopicSource|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d8b2b84c-6638-4843-9df5-de6a0e09886f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.088+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:29:26 policy-apex-pdp | allow.auto.create.topics = true 14:29:26 policy-apex-pdp | auto.commit.interval.ms = 5000 14:29:26 policy-apex-pdp | auto.include.jmx.reporter = true 14:29:26 policy-apex-pdp | auto.offset.reset = latest 14:29:26 policy-apex-pdp | bootstrap.servers = [kafka:9092] 14:29:26 policy-apex-pdp | check.crcs = true 14:29:26 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 14:29:26 policy-apex-pdp | client.id = consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2 14:29:26 policy-apex-pdp | client.rack = 14:29:26 policy-apex-pdp | connections.max.idle.ms = 540000 14:29:26 policy-apex-pdp | default.api.timeout.ms = 60000 14:29:26 policy-apex-pdp | enable.auto.commit = true 14:29:26 policy-apex-pdp | exclude.internal.topics = true 14:29:26 policy-apex-pdp | fetch.max.bytes = 52428800 14:29:26 policy-apex-pdp | fetch.max.wait.ms = 500 14:29:26 policy-apex-pdp | fetch.min.bytes = 1 14:29:26 policy-apex-pdp | group.id = d8b2b84c-6638-4843-9df5-de6a0e09886f 14:29:26 policy-apex-pdp | group.instance.id = null 14:29:26 policy-apex-pdp | heartbeat.interval.ms = 3000 14:29:26 policy-apex-pdp | interceptor.classes = [] 14:29:26 policy-apex-pdp | internal.leave.group.on.close = true 14:29:26 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 
14:29:26 policy-apex-pdp | isolation.level = read_uncommitted 14:29:26 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:29:26 policy-apex-pdp | max.partition.fetch.bytes = 1048576 14:29:26 policy-apex-pdp | max.poll.interval.ms = 300000 14:29:26 policy-apex-pdp | max.poll.records = 500 14:29:26 policy-apex-pdp | metadata.max.age.ms = 300000 14:29:26 policy-apex-pdp | metric.reporters = [] 14:29:26 policy-apex-pdp | metrics.num.samples = 2 14:29:26 policy-apex-pdp | metrics.recording.level = INFO 14:29:26 policy-apex-pdp | metrics.sample.window.ms = 30000 14:29:26 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:29:26 policy-apex-pdp | receive.buffer.bytes = 65536 14:29:26 policy-apex-pdp | reconnect.backoff.max.ms = 1000 14:29:26 policy-apex-pdp | reconnect.backoff.ms = 50 14:29:26 policy-apex-pdp | request.timeout.ms = 30000 14:29:26 policy-apex-pdp | retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.client.callback.handler.class = null 14:29:26 policy-apex-pdp | sasl.jaas.config = null 14:29:26 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:29:26 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 14:29:26 policy-apex-pdp | sasl.kerberos.service.name = null 14:29:26 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 14:29:26 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 14:29:26 policy-apex-pdp | sasl.login.callback.handler.class = null 14:29:26 policy-apex-pdp | sasl.login.class = null 14:29:26 policy-apex-pdp | sasl.login.connect.timeout.ms = null 14:29:26 policy-apex-pdp | sasl.login.read.timeout.ms = null 14:29:26 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 14:29:26 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 14:29:26 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 14:29:26 
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 14:29:26 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 14:29:26 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.mechanism = GSSAPI 14:29:26 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 14:29:26 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 14:29:26 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 14:29:26 policy-apex-pdp | security.protocol = PLAINTEXT 14:29:26 policy-apex-pdp | security.providers = null 14:29:26 policy-apex-pdp | send.buffer.bytes = 131072 14:29:26 policy-apex-pdp | session.timeout.ms = 45000 14:29:26 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 14:29:26 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 14:29:26 policy-apex-pdp | ssl.cipher.suites = null 14:29:26 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 14:29:26 policy-apex-pdp | ssl.engine.factory.class = null 14:29:26 policy-apex-pdp | ssl.key.password = null 14:29:26 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 14:29:26 policy-apex-pdp | ssl.keystore.certificate.chain = null 14:29:26 policy-apex-pdp | ssl.keystore.key = null 14:29:26 policy-apex-pdp | ssl.keystore.location = null 14:29:26 policy-apex-pdp | ssl.keystore.password = null 14:29:26 policy-apex-pdp | ssl.keystore.type = JKS 14:29:26 policy-apex-pdp | 
ssl.protocol = TLSv1.3 14:29:26 policy-apex-pdp | ssl.provider = null 14:29:26 policy-apex-pdp | ssl.secure.random.implementation = null 14:29:26 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 14:29:26 policy-apex-pdp | ssl.truststore.certificates = null 14:29:26 policy-apex-pdp | ssl.truststore.location = null 14:29:26 policy-apex-pdp | ssl.truststore.password = null 14:29:26 policy-apex-pdp | ssl.truststore.type = JKS 14:29:26 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:29:26 policy-apex-pdp | 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.104+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.104+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.104+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016844104 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.105+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Subscribed to topic(s): policy-pdp-pap 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.105+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=33684537-9e09-4f0c-843f-e073056f35f6, alive=false, publisher=null]]: starting 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.118+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:29:26 policy-apex-pdp | acks = -1 14:29:26 policy-apex-pdp | auto.include.jmx.reporter = true 14:29:26 policy-apex-pdp | batch.size = 16384 14:29:26 policy-apex-pdp | bootstrap.servers = [kafka:9092] 14:29:26 policy-apex-pdp | buffer.memory = 33554432 14:29:26 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 14:29:26 policy-apex-pdp | client.id = producer-1 14:29:26 policy-apex-pdp | compression.type = none 14:29:26 policy-apex-pdp | connections.max.idle.ms = 540000 
14:29:26 policy-apex-pdp | delivery.timeout.ms = 120000 14:29:26 policy-apex-pdp | enable.idempotence = true 14:29:26 policy-apex-pdp | interceptor.classes = [] 14:29:26 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:29:26 policy-apex-pdp | linger.ms = 0 14:29:26 policy-apex-pdp | max.block.ms = 60000 14:29:26 policy-apex-pdp | max.in.flight.requests.per.connection = 5 14:29:26 policy-apex-pdp | max.request.size = 1048576 14:29:26 policy-apex-pdp | metadata.max.age.ms = 300000 14:29:26 policy-apex-pdp | metadata.max.idle.ms = 300000 14:29:26 policy-apex-pdp | metric.reporters = [] 14:29:26 policy-apex-pdp | metrics.num.samples = 2 14:29:26 policy-apex-pdp | metrics.recording.level = INFO 14:29:26 policy-apex-pdp | metrics.sample.window.ms = 30000 14:29:26 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 14:29:26 policy-apex-pdp | partitioner.availability.timeout.ms = 0 14:29:26 policy-apex-pdp | partitioner.class = null 14:29:26 policy-apex-pdp | partitioner.ignore.keys = false 14:29:26 policy-apex-pdp | receive.buffer.bytes = 32768 14:29:26 policy-apex-pdp | reconnect.backoff.max.ms = 1000 14:29:26 policy-apex-pdp | reconnect.backoff.ms = 50 14:29:26 policy-apex-pdp | request.timeout.ms = 30000 14:29:26 policy-apex-pdp | retries = 2147483647 14:29:26 policy-apex-pdp | retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.client.callback.handler.class = null 14:29:26 policy-apex-pdp | sasl.jaas.config = null 14:29:26 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:29:26 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 14:29:26 policy-apex-pdp | sasl.kerberos.service.name = null 14:29:26 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 14:29:26 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 14:29:26 policy-apex-pdp | sasl.login.callback.handler.class = null 14:29:26 policy-apex-pdp | sasl.login.class = null 14:29:26 policy-apex-pdp | 
sasl.login.connect.timeout.ms = null 14:29:26 policy-apex-pdp | sasl.login.read.timeout.ms = null 14:29:26 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 14:29:26 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 14:29:26 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 14:29:26 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 14:29:26 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 14:29:26 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.mechanism = GSSAPI 14:29:26 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 14:29:26 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 14:29:26 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 14:29:26 policy-apex-pdp | security.protocol = PLAINTEXT 14:29:26 policy-apex-pdp | security.providers = null 14:29:26 policy-apex-pdp | send.buffer.bytes = 131072 14:29:26 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 14:29:26 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 14:29:26 policy-apex-pdp | ssl.cipher.suites = null 14:29:26 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 14:29:26 policy-apex-pdp | ssl.engine.factory.class = null 14:29:26 policy-apex-pdp | ssl.key.password = null 14:29:26 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 14:29:26 policy-apex-pdp | 
ssl.keystore.certificate.chain = null 14:29:26 policy-apex-pdp | ssl.keystore.key = null 14:29:26 policy-apex-pdp | ssl.keystore.location = null 14:29:26 policy-apex-pdp | ssl.keystore.password = null 14:29:26 policy-apex-pdp | ssl.keystore.type = JKS 14:29:26 policy-apex-pdp | ssl.protocol = TLSv1.3 14:29:26 policy-apex-pdp | ssl.provider = null 14:29:26 policy-apex-pdp | ssl.secure.random.implementation = null 14:29:26 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 14:29:26 policy-apex-pdp | ssl.truststore.certificates = null 14:29:26 policy-apex-pdp | ssl.truststore.location = null 14:29:26 policy-apex-pdp | ssl.truststore.password = null 14:29:26 policy-apex-pdp | ssl.truststore.type = JKS 14:29:26 policy-apex-pdp | transaction.timeout.ms = 60000 14:29:26 policy-apex-pdp | transactional.id = null 14:29:26 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:29:26 policy-apex-pdp | 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.132+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
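The ConsumerConfig and ProducerConfig dumps above are the effective Kafka client settings; nearly everything is at its default, with PLAINTEXT security and String (de)serializers. For poking at the same topics outside the container, the few non-default values can be collected into a properties file. A sketch (the file name and the console-tool invocation are assumptions; the bootstrap server and group id are copied from the log):

```shell
#!/bin/bash
# Collect the non-default settings from the ConsumerConfig dump into a
# properties file usable with the stock Kafka CLI tools.
PROPS=client.properties
cat > "$PROPS" <<'EOF'
bootstrap.servers=kafka:9092
group.id=d8b2b84c-6638-4843-9df5-de6a0e09886f
auto.offset.reset=latest
security.protocol=PLAINTEXT
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
EOF
# Hypothetical invocation (requires a Kafka distribution on PATH):
#   kafka-console-consumer.sh --bootstrap-server kafka:9092 \
#       --topic policy-pdp-pap --consumer.config "$PROPS"
```

Note the group id doubles as the consumer's `client.id` prefix in the log (`consumer-d8b2b84c-...-1`, `-2` for the two consumers created from the same config).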
14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.148+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.148+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.148+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016844148 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.149+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=33684537-9e09-4f0c-843f-e073056f35f6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.149+00:00|INFO|ServiceManager|main] service manager starting set alive 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.149+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.152+00:00|INFO|ServiceManager|main] service manager starting topic sinks 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.152+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d8b2b84c-6638-4843-9df5-de6a0e09886f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, 
uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d8b2b84c-6638-4843-9df5-de6a0e09886f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.154+00:00|INFO|ServiceManager|main] service manager starting Create REST server 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.171+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 14:29:26 policy-apex-pdp | [] 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.190+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ebbab027-f2a3-403a-acca-49e0bd29cb33","timestampMs":1720016844156,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.401+00:00|INFO|ServiceManager|main] service manager starting Rest Server 14:29:26 
policy-apex-pdp | [2024-07-03T14:27:24.402+00:00|INFO|ServiceManager|main] service manager starting 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.402+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.402+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.425+00:00|INFO|ServiceManager|main] service manager started 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.425+00:00|INFO|ServiceManager|main] service manager started 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.426+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
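The "service manager starting ... / service manager started" lines above reflect an ordered start-up: components are brought up in registration order. A minimal Python sketch of that pattern (an illustration only, not the ONAP `ServiceManager` implementation; the class and service names here are assumptions):

```python
# Sketch of an ordered service manager: start in registration order,
# stop in reverse order so dependents go down before their dependencies.
class ServiceManager:
    def __init__(self):
        self._services = []  # (name, start_fn, stop_fn) in registration order
        self.log = []

    def register(self, name, start_fn, stop_fn):
        self._services.append((name, start_fn, stop_fn))

    def start(self):
        for name, start_fn, _ in self._services:
            self.log.append(f"service manager starting {name}")
            start_fn()
        self.log.append("service manager started")

    def stop(self):
        # Reverse order on shutdown.
        for _, _, stop_fn in reversed(self._services):
            stop_fn()

mgr = ServiceManager()
mgr.register("topic sinks", lambda: None, lambda: None)
mgr.register("Pdp Status publisher", lambda: None, lambda: None)
mgr.register("Rest Server", lambda: None, lambda: None)
mgr.start()
```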
14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.434+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.543+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.546+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.544+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Cluster ID: 
oH0-pHXbT_qkvD-L4l6e0Q
14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.567+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] (Re-)joining group
14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Request joining group due to: need to re-join with the given member-id: consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab
14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.584+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
14:29:26 policy-apex-pdp | [2024-07-03T14:27:24.584+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] (Re-)joining group
14:29:26 policy-apex-pdp | [2024-07-03T14:27:25.091+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
14:29:26 policy-apex-pdp | [2024-07-03T14:27:25.093+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
14:29:26 policy-apex-pdp | [2024-07-03T14:27:25.235+00:00|INFO|RequestLog|qtp739264372-32] 172.17.0.1 - - [03/Jul/2024:14:27:25 +0000] "GET / HTTP/1.1" 401 495 "-" "curl/7.58.0"
14:29:26 policy-apex-pdp | [2024-07-03T14:27:27.588+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab', protocol='range'}
14:29:26 policy-apex-pdp | [2024-07-03T14:27:27.597+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Finished assignment for group at generation 1: {consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab=Assignment(partitions=[policy-pdp-pap-0])}
14:29:26 policy-apex-pdp | [2024-07-03T14:27:27.603+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2-02e1bbb6-69ac-41b2-aef0-bbd52b0c50ab', protocol='range'}
14:29:26 policy-apex-pdp | [2024-07-03T14:27:27.604+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
14:29:26 policy-apex-pdp | [2024-07-03T14:27:27.605+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Adding newly assigned partitions: policy-pdp-pap-0
14:29:26 policy-apex-pdp | [2024-07-03T14:27:27.611+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Found no committed offset for partition policy-pdp-pap-0
14:29:26 policy-apex-pdp | [2024-07-03T14:27:27.621+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d8b2b84c-6638-4843-9df5-de6a0e09886f-2, groupId=d8b2b84c-6638-4843-9df5-de6a0e09886f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
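The join sequence above is the normal two-step Kafka group join: the first JoinGroup carries no member id, the coordinator rejects it with MemberIdRequiredException while handing back an assigned id, and the consumer immediately rejoins with that id. A hedged sketch of that handshake (illustrative names only, not the Kafka client API):

```python
# Toy model of the MemberIdRequiredException rebalance flow seen in the log:
# first join is rejected but yields a member id; the rejoin succeeds.
import uuid

class GroupCoordinator:
    def __init__(self):
        self.members = set()

    def join(self, member_id=None):
        if member_id is None:
            # First attempt: assign an id and ask the member to rejoin.
            return ("MEMBER_ID_REQUIRED", f"consumer-{uuid.uuid4()}")
        self.members.add(member_id)
        return ("OK", member_id)

def join_group(coordinator):
    status, member_id = coordinator.join()                 # "rebalance failed" ...
    if status == "MEMBER_ID_REQUIRED":
        status, member_id = coordinator.join(member_id)    # ... "(Re-)joining group"
    return status, member_id

coord = GroupCoordinator()
status, member_id = join_group(coord)
```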
14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.154+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.176+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.179+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.331+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.344+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.344+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.348+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.365+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.365+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.368+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.368+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.396+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.398+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.409+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.409+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.432+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.434+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.442+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:29:26 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:29:26 policy-apex-pdp | [2024-07-03T14:27:44.443+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:29:26 policy-apex-pdp | [2024-07-03T14:27:45.288+00:00|INFO|RequestLog|qtp739264372-31] 172.17.0.1 - policyadmin [03/Jul/2024:14:27:45 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "-" "curl/7.58.0" 14:29:26 policy-apex-pdp | [2024-07-03T14:27:56.087+00:00|INFO|RequestLog|qtp739264372-30] 172.17.0.5 - policyadmin [03/Jul/2024:14:27:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.53.0" 14:29:26 policy-apex-pdp | [2024-07-03T14:28:17.222+00:00|INFO|RequestLog|qtp739264372-26] 172.17.0.6 - policyadmin [03/Jul/2024:14:28:17 +0000] "GET /policy/apex-pdp/v1/healthcheck?null HTTP/1.1" 200 109 "-" "python-requests/2.32.3" 14:29:26 policy-apex-pdp | [2024-07-03T14:28:18.769+00:00|INFO|RequestLog|qtp739264372-27] 172.17.0.6 - policyadmin [03/Jul/2024:14:28:18 +0000] "GET /metrics?null HTTP/1.1" 200 11010 "-" "python-requests/2.32.3" 14:29:26 policy-apex-pdp | [2024-07-03T14:28:18.791+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.6 - policyadmin [03/Jul/2024:14:28:18 +0000] "GET /policy/apex-pdp/v1/healthcheck?null HTTP/1.1" 200 109 "-" "python-requests/2.32.3" 14:29:26 policy-apex-pdp | [2024-07-03T14:28:56.078+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.5 - policyadmin [03/Jul/2024:14:28:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.53.0" 14:29:26 =================================== 14:29:26 ======== Logs from api ======== 14:29:26 policy-api | Waiting for mariadb port 3306... 14:29:26 policy-api | mariadb (172.17.0.3:3306) open 14:29:26 policy-api | Waiting for policy-db-migrator port 6824... 
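The "Waiting for mariadb port 3306..." lines above come from a shell loop around `nc` in the container entrypoint. A rough Python equivalent of that wait-for-port behaviour (an assumption for illustration, not the actual entrypoint script):

```python
# Retry a TCP connect until the port accepts or a deadline passes,
# mirroring the "nc ... Connection refused" retry loop in the log.
import socket
import time

def wait_for_port(host, port, timeout=120.0, interval=2.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True   # the "port ... open" / "succeeded!" case
        except OSError:
            time.sleep(interval)  # "Connection refused" -> wait and retry
    return False
```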
14:29:26 policy-api | policy-db-migrator (172.17.0.6:6824) open
14:29:26 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
14:29:26 policy-api | 
14:29:26 policy-api |   .   ____          _            __ _ _
14:29:26 policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
14:29:26 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
14:29:26 policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
14:29:26 policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
14:29:26 policy-api |  =========|_|==============|___/=/_/_/_/
14:29:26 policy-api |  :: Spring Boot ::                (v3.1.10)
14:29:26 policy-api | 
14:29:26 policy-api | [2024-07-03T14:27:02.164+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
14:29:26 policy-api | [2024-07-03T14:27:02.228+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin)
14:29:26 policy-api | [2024-07-03T14:27:02.229+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
14:29:26 policy-api | [2024-07-03T14:27:04.184+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
14:29:26 policy-api | [2024-07-03T14:27:04.381+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 186 ms. Found 6 JPA repository interfaces.
14:29:26 policy-api | [2024-07-03T14:27:05.166+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 14:29:26 policy-api | [2024-07-03T14:27:05.177+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 14:29:26 policy-api | [2024-07-03T14:27:05.179+00:00|INFO|StandardService|main] Starting service [Tomcat] 14:29:26 policy-api | [2024-07-03T14:27:05.179+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 14:29:26 policy-api | [2024-07-03T14:27:05.275+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 14:29:26 policy-api | [2024-07-03T14:27:05.275+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2971 ms 14:29:26 policy-api | [2024-07-03T14:27:05.610+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 14:29:26 policy-api | [2024-07-03T14:27:05.678+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 14:29:26 policy-api | [2024-07-03T14:27:05.725+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 14:29:26 policy-api | [2024-07-03T14:27:06.030+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 14:29:26 policy-api | [2024-07-03T14:27:06.063+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 14:29:26 policy-api | [2024-07-03T14:27:06.176+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@67b100fe 14:29:26 policy-api | [2024-07-03T14:27:06.179+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
14:29:26 policy-api | [2024-07-03T14:27:08.164+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 14:29:26 policy-api | [2024-07-03T14:27:08.167+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 14:29:26 policy-api | [2024-07-03T14:27:08.862+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 14:29:26 policy-api | [2024-07-03T14:27:09.647+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 14:29:26 policy-api | [2024-07-03T14:27:10.721+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 14:29:26 policy-api | [2024-07-03T14:27:10.885+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7ce299c6, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@278e5f8e, org.springframework.security.web.context.SecurityContextHolderFilter@93231b2, org.springframework.security.web.header.HeaderWriterFilter@37264d08, org.springframework.security.web.authentication.logout.LogoutFilter@112188cc, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@457512b, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@65d5de1a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@44d51c85, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@44a3b4d9, org.springframework.security.web.access.ExceptionTranslationFilter@59fe8d94, 
org.springframework.security.web.access.intercept.AuthorizationFilter@4e42beba] 14:29:26 policy-api | [2024-07-03T14:27:11.542+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 14:29:26 policy-api | [2024-07-03T14:27:11.626+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 14:29:26 policy-api | [2024-07-03T14:27:11.647+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 14:29:26 policy-api | [2024-07-03T14:27:11.666+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.188 seconds (process running for 10.802) 14:29:26 policy-api | [2024-07-03T14:27:39.922+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 14:29:26 policy-api | [2024-07-03T14:27:39.922+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 14:29:26 policy-api | [2024-07-03T14:27:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 14:29:26 policy-api | [2024-07-03T14:28:17.410+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 14:29:26 policy-api | [] 14:29:26 =================================== 14:29:26 ======== Logs from csit-tests ======== 14:29:26 policy-csit | Invoking the robot tests from: apex-pdp-test.robot apex-slas.robot 14:29:26 policy-csit | Run Robot test 14:29:26 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 14:29:26 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 14:29:26 policy-csit | -v POLICY_API_IP:policy-api:6969 14:29:26 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 14:29:26 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 14:29:26 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 14:29:26 policy-csit | -v 
APEX_IP:policy-apex-pdp:6969
14:29:26 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
14:29:26 policy-csit | -v KAFKA_IP:kafka:9092
14:29:26 policy-csit | -v PROMETHEUS_IP:prometheus:9090
14:29:26 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
14:29:26 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
14:29:26 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
14:29:26 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
14:29:26 policy-csit | -v TEMP_FOLDER:/tmp/distribution
14:29:26 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
14:29:26 policy-csit | -v TEST_ENV:
14:29:26 policy-csit | -v JAEGER_IP:jaeger:16686
14:29:26 policy-csit | Starting Robot test suites ...
14:29:26 policy-csit | ==============================================================================
14:29:26 policy-csit | Apex-Pdp-Test & Apex-Slas
14:29:26 policy-csit | ==============================================================================
14:29:26 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test
14:29:26 policy-csit | ==============================================================================
14:29:26 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | ExecuteApexSampleDomainPolicy | FAIL |
14:29:26 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | ExecuteApexTestPnfPolicy | FAIL |
14:29:26 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | ExecuteApexTestPnfPolicyWithMetadataSet | FAIL |
14:29:26 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | Metrics :: Verify policy-apex-pdp is exporting prometheus metrics | FAIL |
14:29:26 policy-csit | '# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
14:29:26 policy-csit | # TYPE process_cpu_seconds_total counter
14:29:26 policy-csit | process_cpu_seconds_total 8.34
14:29:26 policy-csit | # HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
14:29:26 policy-csit | # TYPE process_start_time_seconds gauge
14:29:26 policy-csit | process_start_time_seconds 1.720016842817E9
14:29:26 policy-csit | # HELP process_open_fds Number of open file descriptors.
14:29:26 policy-csit | # TYPE process_open_fds gauge
14:29:26 policy-csit | process_open_fds 387.0
14:29:26 policy-csit | # HELP process_max_fds Maximum number of open file descriptors.
14:29:26 policy-csit | # TYPE process_max_fds gauge
14:29:26 policy-csit | process_max_fds 1048576.0
14:29:26 policy-csit | # HELP process_virtual_memory_bytes Virtual memory size in bytes.
14:29:26 policy-csit | # TYPE process_virtual_memory_bytes gauge
14:29:26 policy-csit | process_virtual_memory_bytes 1.0461679616E10
14:29:26 policy-csit | # HELP process_resident_memory_bytes Resident memory size in bytes.
14:29:26 policy-csit | # TYPE process_resident_memory_bytes gauge
14:29:26 policy-csit | process_resident_memory_bytes 1.99868416E8
14:29:26 policy-csit | [ Message content over the limit has been removed. ]
14:29:26 policy-csit | # TYPE pdpa_policy_deployments_total counter
14:29:26 policy-csit | # HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
14:29:26 policy-csit | # TYPE jvm_memory_pool_allocated_bytes_created gauge
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.720016844472E9
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Old Gen",} 1.720016844501E9
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Eden Space",} 1.720016844501E9
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.720016844501E9
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="G1 Survivor Space",} 1.720016844501E9
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.720016844501E9
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.720016844501E9
14:29:26 policy-csit | jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.720016844501E9
14:29:26 policy-csit | ' does not contain 'pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0'
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Pdp-Test | FAIL |
14:29:26 policy-csit | 5 tests, 1 passed, 4 failed
14:29:26 policy-csit | ==============================================================================
14:29:26 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas
14:29:26 policy-csit | ==============================================================================
14:29:26 policy-csit | Healthcheck :: Runs Apex PDP Health check | PASS |
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | ValidatePolicyExecutionAndEventRateLowComplexity :: Validate that ... | FAIL |
14:29:26 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | ValidatePolicyExecutionAndEventRateModerateComplexity :: Validate ... | FAIL |
14:29:26 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | ValidatePolicyExecutionAndEventRateHighComplexity :: Validate that... | FAIL |
14:29:26 policy-csit | Url: http://policy-api:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies?null Expected status: 201 != 200
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | ValidatePolicyExecutionTimes :: Validate policy execution times us... | FAIL |
14:29:26 policy-csit | Resolving variable '${resp['data']['result'][0]['value'][1]}' failed: IndexError: list index out of range
14:29:26 policy-csit | ------------------------------------------------------------------------------
14:29:26 policy-csit | Apex-Pdp-Test & Apex-Slas.Apex-Slas | FAIL |
14:29:26 policy-csit | 6 tests, 2 passed, 4 failed
14:29:26 policy-csit | ==============================================================================
14:29:26 policy-csit | Apex-Pdp-Test & Apex-Slas | FAIL |
14:29:26 policy-csit | 11 tests, 3 passed, 8 failed
14:29:26 policy-csit | ==============================================================================
14:29:26 policy-csit | Output: /tmp/results/output.xml
14:29:26 policy-csit | Log: /tmp/results/log.html
14:29:26 policy-csit | Report: /tmp/results/report.html
14:29:26 policy-csit | RESULT: 8
14:29:26 ===================================
14:29:26 ======== Logs from policy-db-migrator ========
14:29:26 policy-db-migrator | Waiting for mariadb port 3306...
14:29:26 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:29:26 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:29:26 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:29:26 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:29:26 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:29:26 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:29:26 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
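The failed "Metrics" test in the Robot results above is a plain containment check: it scrapes the `/metrics` body and asserts a specific sample line is present. A minimal sketch of that kind of check over Prometheus exposition-format text (the helper name and label-matching approach are assumptions; the actual Robot keyword does a simple substring check):

```python
# Look for a sample of `name` carrying all the given labels in scraped
# exposition-format text, optionally requiring a minimum value.
def has_sample(metrics_text, name, labels, min_value=None):
    for line in metrics_text.splitlines():
        if line.startswith("#") or not line.startswith(name):
            continue  # skip HELP/TYPE comments and other metrics
        if all(f'{k}="{v}"' in line for k, v in labels.items()):
            if min_value is None:
                return True
            try:
                return float(line.rsplit(None, 1)[-1]) >= min_value
            except ValueError:
                return False
    return False

scraped = """# TYPE pdpa_policy_deployments_total counter
pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 3.0
"""
ok = has_sample(scraped, "pdpa_policy_deployments_total",
                {"operation": "deploy", "status": "TOTAL"}, min_value=3.0)
```

Here the run failed because no `pdpa_policy_deployments_total{...}` sample appeared at all (only its `# TYPE` line), since the policy deployments themselves had failed with `Expected status: 201 != 200`.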
14:29:26 policy-db-migrator | 321 blocks
14:29:26 policy-db-migrator | Preparing upgrade release version: 0800
14:29:26 policy-db-migrator | Preparing upgrade release version: 0900
14:29:26 policy-db-migrator | Preparing upgrade release version: 1000
14:29:26 policy-db-migrator | Preparing upgrade release version: 1100
14:29:26 policy-db-migrator | Preparing upgrade release version: 1200
14:29:26 policy-db-migrator | Preparing upgrade release version: 1300
14:29:26 policy-db-migrator | Done
14:29:26 policy-db-migrator | name version
14:29:26 policy-db-migrator | policyadmin 0
14:29:26 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
14:29:26 policy-db-migrator | upgrade: 0 -> 1300
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0450-pdpgroup.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0470-pdp.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0570-toscadatatype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0630-toscanodetype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0660-toscaparameter.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0670-toscapolicies.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0690-toscapolicy.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0730-toscaproperty.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0770-toscarequirement.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0780-toscarequirements.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator |
14:29:26 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
14:29:26 policy-db-migrator | --------------
14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL,
relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0820-toscatrigger.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 
0830-FK_ToscaNodeTemplate_capabilitiesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON 
pdppolicystatus(PDPGROUP) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator 
| -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 
policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate 
(parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0100-pdp.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND 
p.timeStamp = t.timeStamp) SET p.id=t.row_num 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 14:29:26 policy-db-migrator | JOIN pdpstatistics b 14:29:26 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 14:29:26 policy-db-migrator | SET a.id = b.id 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0210-sequence.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0220-sequence.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 
> upgrade 0110-jpatoscapolicytype_targets.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0120-toscatrigger.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0140-toscaparameter.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0150-toscaproperty.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 14:29:26 policy-db-migrator | -------------- 14:29:26 
policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0100-upgrade.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | select 'upgrade to 1100 completed' as msg 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | msg 14:29:26 policy-db-migrator | upgrade to 1100 completed 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 
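The `0140-pk_pdpstatistics.sql` step earlier in the log illustrates a common re-keying pattern: add an `ID` column, backfill it with `ROW_NUMBER() OVER (ORDER BY timeStamp)`, then promote it to the primary key (as `0170-pdpstatistics_pk.sql` does above). A sketch of the backfill in Python's `sqlite3` (hypothetical rows; SQLite lacks MySQL's `UPDATE ... JOIN`, so a correlated subquery replaces it, and window functions need SQLite >= 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pdpstatistics (
    name      VARCHAR(120),
    version   VARCHAR(20),
    timeStamp TEXT,
    id        BIGINT
);
INSERT INTO pdpstatistics (name, version, timeStamp) VALUES
    ('pdp-a', '1.0', '2024-07-03 14:26:54'),
    ('pdp-a', '1.0', '2024-07-03 14:26:55'),
    ('pdp-b', '1.0', '2024-07-03 14:26:56');
""")

# Number the rows by timestamp and copy the row number into id, mirroring
# the UPDATE ... JOIN (SELECT ... ROW_NUMBER() OVER ...) from the log.
conn.execute("""
UPDATE pdpstatistics SET id = (
    SELECT row_num FROM (
        SELECT name, version, timeStamp,
               ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num
        FROM pdpstatistics
    ) t
    WHERE t.name = pdpstatistics.name
      AND t.version = pdpstatistics.version
      AND t.timeStamp = pdpstatistics.timeStamp
)
""")
ids = [r[0] for r in conn.execute(
    "SELECT id FROM pdpstatistics ORDER BY timeStamp")]
print(ids)
```

Once every row carries a distinct id, dropping the old composite primary key and adding `PRIMARY KEY (ID)` is safe.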
14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0120-audit_sequence.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | TRUNCATE TABLE sequence 14:29:26 
policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE pdpstatistics 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | DROP TABLE statistics_sequence 14:29:26 policy-db-migrator | -------------- 14:29:26 policy-db-migrator | 14:29:26 policy-db-migrator | policyadmin: OK: upgrade (1300) 14:29:26 policy-db-migrator | name version 14:29:26 policy-db-migrator | policyadmin 1300 14:29:26 policy-db-migrator | ID script operation from_version to_version tag success atTime 14:29:26 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 14:29:26 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 14:29:26 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 14:29:26 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 14:29:26 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54 14:29:26 policy-db-migrator | 6 
0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54
14:29:26 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54
14:29:26 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54
14:29:26 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:54
14:29:26 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:55
14:29:26 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:56
14:29:26 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 71
0800-toscaservicetemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:57
14:29:26 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0307241426540800u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:58
14:29:26 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0307241426540900u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0307241426541000u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0307241426541100u 1 2024-07-03 14:26:59
14:29:26 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00
14:29:26 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00
14:29:26 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00
14:29:26 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0307241426541200u 1 2024-07-03 14:27:00
14:29:26 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0307241426541300u 1 2024-07-03 14:27:00
14:29:26 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0307241426541300u 1 2024-07-03 14:27:00
14:29:26 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0307241426541300u 1 2024-07-03 14:27:00
14:29:26 policy-db-migrator | policyadmin: OK @ 1300
14:29:26 ===================================
14:29:26 ======== Logs from pap ========
14:29:26 policy-pap | Waiting for mariadb port 3306...
14:29:26 policy-pap | mariadb (172.17.0.3:3306) open
14:29:26 policy-pap | Waiting for kafka port 9092...
14:29:26 policy-pap | kafka (172.17.0.8:9092) open
14:29:26 policy-pap | Waiting for api port 6969...
14:29:26 policy-pap | api (172.17.0.7:6969) open
14:29:26 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
14:29:26 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
14:29:26 policy-pap |
14:29:26 policy-pap | .
____ _ __ _ _
14:29:26 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
14:29:26 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
14:29:26 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
14:29:26 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
14:29:26 policy-pap | =========|_|==============|___/=/_/_/_/
14:29:26 policy-pap | :: Spring Boot :: (v3.1.10)
14:29:26 policy-pap |
14:29:26 policy-pap | [2024-07-03T14:27:13.989+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
14:29:26 policy-pap | [2024-07-03T14:27:14.046+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 30 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
14:29:26 policy-pap | [2024-07-03T14:27:14.047+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
14:29:26 policy-pap | [2024-07-03T14:27:16.108+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
14:29:26 policy-pap | [2024-07-03T14:27:16.206+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 88 ms. Found 7 JPA repository interfaces.
14:29:26 policy-pap | [2024-07-03T14:27:16.630+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
14:29:26 policy-pap | [2024-07-03T14:27:16.630+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
14:29:26 policy-pap | [2024-07-03T14:27:17.226+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
14:29:26 policy-pap | [2024-07-03T14:27:17.236+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
14:29:26 policy-pap | [2024-07-03T14:27:17.238+00:00|INFO|StandardService|main] Starting service [Tomcat]
14:29:26 policy-pap | [2024-07-03T14:27:17.238+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
14:29:26 policy-pap | [2024-07-03T14:27:17.325+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
14:29:26 policy-pap | [2024-07-03T14:27:17.325+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3214 ms
14:29:26 policy-pap | [2024-07-03T14:27:17.741+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
14:29:26 policy-pap | [2024-07-03T14:27:17.803+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
14:29:26 policy-pap | [2024-07-03T14:27:18.145+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
14:29:26 policy-pap | [2024-07-03T14:27:18.255+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9
14:29:26 policy-pap | [2024-07-03T14:27:18.258+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
14:29:26 policy-pap | [2024-07-03T14:27:18.286+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
14:29:26 policy-pap | [2024-07-03T14:27:19.729+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
14:29:26 policy-pap | [2024-07-03T14:27:19.744+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
14:29:26 policy-pap | [2024-07-03T14:27:20.247+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
14:29:26 policy-pap | [2024-07-03T14:27:20.672+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
14:29:26 policy-pap | [2024-07-03T14:27:20.787+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
14:29:26 policy-pap | [2024-07-03T14:27:21.060+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:29:26 policy-pap | allow.auto.create.topics = true
14:29:26 policy-pap | auto.commit.interval.ms = 5000
14:29:26 policy-pap | auto.include.jmx.reporter = true
14:29:26 policy-pap | auto.offset.reset = latest
14:29:26 policy-pap | bootstrap.servers = [kafka:9092]
14:29:26 policy-pap | check.crcs = true
14:29:26 policy-pap | client.dns.lookup = use_all_dns_ips
14:29:26 policy-pap | client.id = consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-1
14:29:26 policy-pap | client.rack =
14:29:26 policy-pap | connections.max.idle.ms = 540000
14:29:26 policy-pap | default.api.timeout.ms = 60000
14:29:26 policy-pap | enable.auto.commit = true
14:29:26 policy-pap | exclude.internal.topics = true
14:29:26 policy-pap | fetch.max.bytes = 52428800
14:29:26 policy-pap | fetch.max.wait.ms = 500
14:29:26 policy-pap | fetch.min.bytes = 1
14:29:26 policy-pap | group.id = 98eef03c-6c97-41d2-b0d1-0e3fd148d393
14:29:26 policy-pap | group.instance.id = null
14:29:26 policy-pap | heartbeat.interval.ms = 3000
14:29:26 policy-pap | interceptor.classes = []
14:29:26 policy-pap | internal.leave.group.on.close = true
14:29:26 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
14:29:26 policy-pap | isolation.level = read_uncommitted
14:29:26 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:29:26 policy-pap | max.partition.fetch.bytes = 1048576
14:29:26 policy-pap | max.poll.interval.ms = 300000
14:29:26 policy-pap | max.poll.records = 500
14:29:26 policy-pap | metadata.max.age.ms = 300000
14:29:26 policy-pap | metric.reporters = []
14:29:26 policy-pap | metrics.num.samples = 2
14:29:26 policy-pap | metrics.recording.level = INFO
14:29:26 policy-pap | metrics.sample.window.ms = 30000
14:29:26
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:29:26 policy-pap | receive.buffer.bytes = 65536
14:29:26 policy-pap | reconnect.backoff.max.ms = 1000
14:29:26 policy-pap | reconnect.backoff.ms = 50
14:29:26 policy-pap | request.timeout.ms = 30000
14:29:26 policy-pap | retry.backoff.ms = 100
14:29:26 policy-pap | sasl.client.callback.handler.class = null
14:29:26 policy-pap | sasl.jaas.config = null
14:29:26 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:29:26 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
14:29:26 policy-pap | sasl.kerberos.service.name = null
14:29:26 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
14:29:26 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
14:29:26 policy-pap | sasl.login.callback.handler.class = null
14:29:26 policy-pap | sasl.login.class = null
14:29:26 policy-pap | sasl.login.connect.timeout.ms = null
14:29:26 policy-pap | sasl.login.read.timeout.ms = null
14:29:26 policy-pap | sasl.login.refresh.buffer.seconds = 300
14:29:26 policy-pap | sasl.login.refresh.min.period.seconds = 60
14:29:26 policy-pap | sasl.login.refresh.window.factor = 0.8
14:29:26 policy-pap | sasl.login.refresh.window.jitter = 0.05
14:29:26 policy-pap | sasl.login.retry.backoff.max.ms = 10000
14:29:26 policy-pap | sasl.login.retry.backoff.ms = 100
14:29:26 policy-pap | sasl.mechanism = GSSAPI
14:29:26 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
14:29:26 policy-pap | sasl.oauthbearer.expected.audience = null
14:29:26 policy-pap | sasl.oauthbearer.expected.issuer = null
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
14:29:26 policy-pap | sasl.oauthbearer.scope.claim.name = scope
14:29:26 policy-pap | sasl.oauthbearer.sub.claim.name = sub
14:29:26 policy-pap | sasl.oauthbearer.token.endpoint.url = null
14:29:26 policy-pap | security.protocol = PLAINTEXT
14:29:26 policy-pap | security.providers = null
14:29:26 policy-pap | send.buffer.bytes = 131072
14:29:26 policy-pap | session.timeout.ms = 45000
14:29:26 policy-pap | socket.connection.setup.timeout.max.ms = 30000
14:29:26 policy-pap | socket.connection.setup.timeout.ms = 10000
14:29:26 policy-pap | ssl.cipher.suites = null
14:29:26 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:29:26 policy-pap | ssl.endpoint.identification.algorithm = https
14:29:26 policy-pap | ssl.engine.factory.class = null
14:29:26 policy-pap | ssl.key.password = null
14:29:26 policy-pap | ssl.keymanager.algorithm = SunX509
14:29:26 policy-pap | ssl.keystore.certificate.chain = null
14:29:26 policy-pap | ssl.keystore.key = null
14:29:26 policy-pap | ssl.keystore.location = null
14:29:26 policy-pap | ssl.keystore.password = null
14:29:26 policy-pap | ssl.keystore.type = JKS
14:29:26 policy-pap | ssl.protocol = TLSv1.3
14:29:26 policy-pap | ssl.provider = null
14:29:26 policy-pap | ssl.secure.random.implementation = null
14:29:26 policy-pap | ssl.trustmanager.algorithm = PKIX
14:29:26 policy-pap | ssl.truststore.certificates = null
14:29:26 policy-pap | ssl.truststore.location = null
14:29:26 policy-pap | ssl.truststore.password = null
14:29:26 policy-pap | ssl.truststore.type = JKS
14:29:26 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:29:26 policy-pap |
14:29:26 policy-pap | [2024-07-03T14:27:21.231+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:29:26 policy-pap | [2024-07-03T14:27:21.231+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:29:26 policy-pap | [2024-07-03T14:27:21.231+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016841229
14:29:26 policy-pap |
[2024-07-03T14:27:21.233+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-1, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Subscribed to topic(s): policy-pdp-pap
14:29:26 policy-pap | [2024-07-03T14:27:21.234+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:29:26 policy-pap | allow.auto.create.topics = true
14:29:26 policy-pap | auto.commit.interval.ms = 5000
14:29:26 policy-pap | auto.include.jmx.reporter = true
14:29:26 policy-pap | auto.offset.reset = latest
14:29:26 policy-pap | bootstrap.servers = [kafka:9092]
14:29:26 policy-pap | check.crcs = true
14:29:26 policy-pap | client.dns.lookup = use_all_dns_ips
14:29:26 policy-pap | client.id = consumer-policy-pap-2
14:29:26 policy-pap | client.rack =
14:29:26 policy-pap | connections.max.idle.ms = 540000
14:29:26 policy-pap | default.api.timeout.ms = 60000
14:29:26 policy-pap | enable.auto.commit = true
14:29:26 policy-pap | exclude.internal.topics = true
14:29:26 policy-pap | fetch.max.bytes = 52428800
14:29:26 policy-pap | fetch.max.wait.ms = 500
14:29:26 policy-pap | fetch.min.bytes = 1
14:29:26 policy-pap | group.id = policy-pap
14:29:26 policy-pap | group.instance.id = null
14:29:26 policy-pap | heartbeat.interval.ms = 3000
14:29:26 policy-pap | interceptor.classes = []
14:29:26 policy-pap | internal.leave.group.on.close = true
14:29:26 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
14:29:26 policy-pap | isolation.level = read_uncommitted
14:29:26 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:29:26 policy-pap | max.partition.fetch.bytes = 1048576
14:29:26 policy-pap | max.poll.interval.ms = 300000
14:29:26 policy-pap | max.poll.records = 500
14:29:26 policy-pap | metadata.max.age.ms = 300000
14:29:26 policy-pap | metric.reporters = []
14:29:26 policy-pap | metrics.num.samples = 2
14:29:26 policy-pap | metrics.recording.level = INFO
14:29:26 policy-pap | metrics.sample.window.ms = 30000
14:29:26 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:29:26 policy-pap | receive.buffer.bytes = 65536
14:29:26 policy-pap | reconnect.backoff.max.ms = 1000
14:29:26 policy-pap | reconnect.backoff.ms = 50
14:29:26 policy-pap | request.timeout.ms = 30000
14:29:26 policy-pap | retry.backoff.ms = 100
14:29:26 policy-pap | sasl.client.callback.handler.class = null
14:29:26 policy-pap | sasl.jaas.config = null
14:29:26 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:29:26 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
14:29:26 policy-pap | sasl.kerberos.service.name = null
14:29:26 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
14:29:26 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
14:29:26 policy-pap | sasl.login.callback.handler.class = null
14:29:26 policy-pap | sasl.login.class = null
14:29:26 policy-pap | sasl.login.connect.timeout.ms = null
14:29:26 policy-pap | sasl.login.read.timeout.ms = null
14:29:26 policy-pap | sasl.login.refresh.buffer.seconds = 300
14:29:26 policy-pap | sasl.login.refresh.min.period.seconds = 60
14:29:26 policy-pap | sasl.login.refresh.window.factor = 0.8
14:29:26 policy-pap | sasl.login.refresh.window.jitter = 0.05
14:29:26 policy-pap | sasl.login.retry.backoff.max.ms = 10000
14:29:26 policy-pap | sasl.login.retry.backoff.ms = 100
14:29:26 policy-pap | sasl.mechanism = GSSAPI
14:29:26 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
14:29:26 policy-pap | sasl.oauthbearer.expected.audience = null
14:29:26 policy-pap | sasl.oauthbearer.expected.issuer = null
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
14:29:26 policy-pap | sasl.oauthbearer.scope.claim.name = scope
14:29:26 policy-pap | sasl.oauthbearer.sub.claim.name = sub
14:29:26 policy-pap | sasl.oauthbearer.token.endpoint.url = null
14:29:26 policy-pap | security.protocol = PLAINTEXT
14:29:26 policy-pap | security.providers = null
14:29:26 policy-pap | send.buffer.bytes = 131072
14:29:26 policy-pap | session.timeout.ms = 45000
14:29:26 policy-pap | socket.connection.setup.timeout.max.ms = 30000
14:29:26 policy-pap | socket.connection.setup.timeout.ms = 10000
14:29:26 policy-pap | ssl.cipher.suites = null
14:29:26 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:29:26 policy-pap | ssl.endpoint.identification.algorithm = https
14:29:26 policy-pap | ssl.engine.factory.class = null
14:29:26 policy-pap | ssl.key.password = null
14:29:26 policy-pap | ssl.keymanager.algorithm = SunX509
14:29:26 policy-pap | ssl.keystore.certificate.chain = null
14:29:26 policy-pap | ssl.keystore.key = null
14:29:26 policy-pap | ssl.keystore.location = null
14:29:26 policy-pap | ssl.keystore.password = null
14:29:26 policy-pap | ssl.keystore.type = JKS
14:29:26 policy-pap | ssl.protocol = TLSv1.3
14:29:26 policy-pap | ssl.provider = null
14:29:26 policy-pap | ssl.secure.random.implementation = null
14:29:26 policy-pap | ssl.trustmanager.algorithm = PKIX
14:29:26 policy-pap | ssl.truststore.certificates = null
14:29:26 policy-pap | ssl.truststore.location = null
14:29:26 policy-pap | ssl.truststore.password = null
14:29:26 policy-pap | ssl.truststore.type = JKS
14:29:26 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:29:26 policy-pap |
14:29:26 policy-pap | [2024-07-03T14:27:21.240+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:29:26 policy-pap | [2024-07-03T14:27:21.240+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:29:26 policy-pap | [2024-07-03T14:27:21.240+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016841240
14:29:26 policy-pap |
[2024-07-03T14:27:21.241+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
14:29:26 policy-pap | [2024-07-03T14:27:21.544+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
14:29:26 policy-pap | [2024-07-03T14:27:21.687+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
14:29:26 policy-pap | [2024-07-03T14:27:21.901+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7cf66cf9, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@38f63756, org.springframework.security.web.context.SecurityContextHolderFilter@574f9e36, org.springframework.security.web.header.HeaderWriterFilter@70aa03c0, org.springframework.security.web.authentication.logout.LogoutFilter@37b80ec7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@522f0bb8, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@60b4d934, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@41abee65, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3d7caf9c, org.springframework.security.web.access.ExceptionTranslationFilter@5ced0537, org.springframework.security.web.access.intercept.AuthorizationFilter@4c0930c4]
14:29:26 policy-pap | [2024-07-03T14:27:22.622+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
14:29:26 policy-pap | [2024-07-03T14:27:22.718+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
14:29:26 policy-pap | [2024-07-03T14:27:22.743+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
14:29:26 policy-pap | [2024-07-03T14:27:22.761+00:00|INFO|ServiceManager|main] Policy PAP starting
14:29:26 policy-pap | [2024-07-03T14:27:22.762+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
14:29:26 policy-pap | [2024-07-03T14:27:22.762+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
14:29:26 policy-pap | [2024-07-03T14:27:22.763+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
14:29:26 policy-pap | [2024-07-03T14:27:22.763+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
14:29:26 policy-pap | [2024-07-03T14:27:22.764+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
14:29:26 policy-pap | [2024-07-03T14:27:22.764+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
14:29:26 policy-pap | [2024-07-03T14:27:22.766+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=98eef03c-6c97-41d2-b0d1-0e3fd148d393, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4270705f
14:29:26 policy-pap | [2024-07-03T14:27:22.777+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=98eef03c-6c97-41d2-b0d1-0e3fd148d393, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
14:29:26 policy-pap | [2024-07-03T14:27:22.777+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:29:26 policy-pap | allow.auto.create.topics = true
14:29:26 policy-pap | auto.commit.interval.ms = 5000
14:29:26 policy-pap | auto.include.jmx.reporter = true
14:29:26 policy-pap | auto.offset.reset = latest
14:29:26 policy-pap | bootstrap.servers = [kafka:9092]
14:29:26 policy-pap | check.crcs = true
14:29:26 policy-pap | client.dns.lookup = use_all_dns_ips
14:29:26 policy-pap | client.id = consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3
14:29:26 policy-pap | client.rack =
14:29:26 policy-pap | connections.max.idle.ms = 540000
14:29:26 policy-pap | default.api.timeout.ms = 60000
14:29:26 policy-pap | enable.auto.commit = true
14:29:26 policy-pap | exclude.internal.topics = true
14:29:26 policy-pap | fetch.max.bytes = 52428800
14:29:26 policy-pap | fetch.max.wait.ms = 500
14:29:26 policy-pap | fetch.min.bytes = 1
14:29:26 policy-pap | group.id = 98eef03c-6c97-41d2-b0d1-0e3fd148d393
14:29:26 policy-pap | group.instance.id = null
14:29:26 policy-pap | heartbeat.interval.ms = 3000
14:29:26 policy-pap | interceptor.classes = []
14:29:26 policy-pap | internal.leave.group.on.close = true
14:29:26 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
14:29:26 policy-pap | isolation.level = read_uncommitted
14:29:26 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:29:26 policy-pap | max.partition.fetch.bytes = 1048576
14:29:26 policy-pap | max.poll.interval.ms = 300000
14:29:26 policy-pap | max.poll.records = 500
14:29:26 policy-pap | metadata.max.age.ms = 300000
14:29:26 policy-pap | metric.reporters = []
14:29:26 policy-pap | metrics.num.samples = 2
14:29:26 policy-pap | metrics.recording.level = INFO
14:29:26 policy-pap | metrics.sample.window.ms = 30000
14:29:26 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:29:26 policy-pap | receive.buffer.bytes = 65536
14:29:26 policy-pap | reconnect.backoff.max.ms = 1000
14:29:26 policy-pap | reconnect.backoff.ms = 50
14:29:26 policy-pap | request.timeout.ms = 30000
14:29:26 policy-pap | retry.backoff.ms = 100
14:29:26 policy-pap | sasl.client.callback.handler.class = null
14:29:26 policy-pap | sasl.jaas.config = null
14:29:26 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:29:26 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
14:29:26 policy-pap | sasl.kerberos.service.name = null
14:29:26 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
14:29:26 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
14:29:26 policy-pap | sasl.login.callback.handler.class = null
14:29:26 policy-pap | sasl.login.class = null
14:29:26 policy-pap | sasl.login.connect.timeout.ms = null
14:29:26 policy-pap | sasl.login.read.timeout.ms = null
14:29:26 policy-pap | sasl.login.refresh.buffer.seconds = 300
14:29:26 policy-pap | sasl.login.refresh.min.period.seconds = 60
14:29:26 policy-pap | sasl.login.refresh.window.factor = 0.8
14:29:26 policy-pap | sasl.login.refresh.window.jitter = 0.05
14:29:26 policy-pap |
sasl.login.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.login.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.mechanism = GSSAPI 14:29:26 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 policy-pap | sasl.oauthbearer.expected.audience = null 14:29:26 policy-pap | sasl.oauthbearer.expected.issuer = null 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:29:26 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:29:26 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:29:26 policy-pap | security.protocol = PLAINTEXT 14:29:26 policy-pap | security.providers = null 14:29:26 policy-pap | send.buffer.bytes = 131072 14:29:26 policy-pap | session.timeout.ms = 45000 14:29:26 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:29:26 policy-pap | socket.connection.setup.timeout.ms = 10000 14:29:26 policy-pap | ssl.cipher.suites = null 14:29:26 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 policy-pap | ssl.endpoint.identification.algorithm = https 14:29:26 policy-pap | ssl.engine.factory.class = null 14:29:26 policy-pap | ssl.key.password = null 14:29:26 policy-pap | ssl.keymanager.algorithm = SunX509 14:29:26 policy-pap | ssl.keystore.certificate.chain = null 14:29:26 policy-pap | ssl.keystore.key = null 14:29:26 policy-pap | ssl.keystore.location = null 14:29:26 policy-pap | ssl.keystore.password = null 14:29:26 policy-pap | ssl.keystore.type = JKS 14:29:26 policy-pap | ssl.protocol = TLSv1.3 14:29:26 policy-pap | ssl.provider = null 14:29:26 policy-pap | ssl.secure.random.implementation = null 14:29:26 policy-pap | ssl.trustmanager.algorithm = PKIX 14:29:26 policy-pap | ssl.truststore.certificates = 
null 14:29:26 policy-pap | ssl.truststore.location = null 14:29:26 policy-pap | ssl.truststore.password = null 14:29:26 policy-pap | ssl.truststore.type = JKS 14:29:26 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:29:26 policy-pap | 14:29:26 policy-pap | [2024-07-03T14:27:22.783+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:29:26 policy-pap | [2024-07-03T14:27:22.783+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:29:26 policy-pap | [2024-07-03T14:27:22.784+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842783 14:29:26 policy-pap | [2024-07-03T14:27:22.784+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Subscribed to topic(s): policy-pdp-pap 14:29:26 policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 14:29:26 policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8195bf70-75c1-45e7-9426-15be720361af, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@18bf1bad 14:29:26 policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8195bf70-75c1-45e7-9426-15be720361af, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:29:26 policy-pap | [2024-07-03T14:27:22.786+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:29:26 policy-pap | allow.auto.create.topics = true 14:29:26 policy-pap | auto.commit.interval.ms = 5000 14:29:26 policy-pap | auto.include.jmx.reporter = true 14:29:26 policy-pap | auto.offset.reset = latest 14:29:26 policy-pap | bootstrap.servers = [kafka:9092] 14:29:26 policy-pap | check.crcs = true 14:29:26 policy-pap | client.dns.lookup = use_all_dns_ips 14:29:26 policy-pap | client.id = consumer-policy-pap-4 14:29:26 policy-pap | client.rack = 14:29:26 policy-pap | connections.max.idle.ms = 540000 14:29:26 policy-pap | default.api.timeout.ms = 60000 14:29:26 policy-pap | enable.auto.commit = true 14:29:26 policy-pap | exclude.internal.topics = true 14:29:26 policy-pap | fetch.max.bytes = 52428800 14:29:26 policy-pap | fetch.max.wait.ms = 500 14:29:26 policy-pap | fetch.min.bytes = 1 14:29:26 policy-pap | group.id = policy-pap 14:29:26 policy-pap | group.instance.id = null 14:29:26 policy-pap | heartbeat.interval.ms = 3000 14:29:26 policy-pap | interceptor.classes = [] 14:29:26 policy-pap | internal.leave.group.on.close = true 14:29:26 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:29:26 policy-pap | isolation.level = read_uncommitted 14:29:26 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:29:26 policy-pap | max.partition.fetch.bytes = 1048576 14:29:26 policy-pap | max.poll.interval.ms = 300000 14:29:26 policy-pap | max.poll.records = 500 14:29:26 policy-pap | metadata.max.age.ms = 
300000 14:29:26 policy-pap | metric.reporters = [] 14:29:26 policy-pap | metrics.num.samples = 2 14:29:26 policy-pap | metrics.recording.level = INFO 14:29:26 policy-pap | metrics.sample.window.ms = 30000 14:29:26 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:29:26 policy-pap | receive.buffer.bytes = 65536 14:29:26 policy-pap | reconnect.backoff.max.ms = 1000 14:29:26 policy-pap | reconnect.backoff.ms = 50 14:29:26 policy-pap | request.timeout.ms = 30000 14:29:26 policy-pap | retry.backoff.ms = 100 14:29:26 policy-pap | sasl.client.callback.handler.class = null 14:29:26 policy-pap | sasl.jaas.config = null 14:29:26 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:29:26 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:29:26 policy-pap | sasl.kerberos.service.name = null 14:29:26 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:29:26 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:29:26 policy-pap | sasl.login.callback.handler.class = null 14:29:26 policy-pap | sasl.login.class = null 14:29:26 policy-pap | sasl.login.connect.timeout.ms = null 14:29:26 policy-pap | sasl.login.read.timeout.ms = null 14:29:26 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:29:26 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:29:26 policy-pap | sasl.login.refresh.window.factor = 0.8 14:29:26 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:29:26 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.login.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.mechanism = GSSAPI 14:29:26 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 policy-pap | sasl.oauthbearer.expected.audience = null 14:29:26 policy-pap | sasl.oauthbearer.expected.issuer = null 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:29:26 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:29:26 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:29:26 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:29:26 policy-pap | security.protocol = PLAINTEXT 14:29:26 policy-pap | security.providers = null 14:29:26 policy-pap | send.buffer.bytes = 131072 14:29:26 policy-pap | session.timeout.ms = 45000 14:29:26 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:29:26 policy-pap | socket.connection.setup.timeout.ms = 10000 14:29:26 policy-pap | ssl.cipher.suites = null 14:29:26 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 policy-pap | ssl.endpoint.identification.algorithm = https 14:29:26 policy-pap | ssl.engine.factory.class = null 14:29:26 policy-pap | ssl.key.password = null 14:29:26 policy-pap | ssl.keymanager.algorithm = SunX509 14:29:26 policy-pap | ssl.keystore.certificate.chain = null 14:29:26 policy-pap | ssl.keystore.key = null 14:29:26 policy-pap | ssl.keystore.location = null 14:29:26 policy-pap | ssl.keystore.password = null 14:29:26 policy-pap | ssl.keystore.type = JKS 14:29:26 policy-pap | ssl.protocol = TLSv1.3 14:29:26 policy-pap | ssl.provider = null 14:29:26 policy-pap | ssl.secure.random.implementation = null 14:29:26 policy-pap | ssl.trustmanager.algorithm = PKIX 14:29:26 policy-pap | ssl.truststore.certificates = null 14:29:26 policy-pap | ssl.truststore.location = null 14:29:26 policy-pap | ssl.truststore.password = null 14:29:26 policy-pap | ssl.truststore.type = JKS 14:29:26 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:29:26 policy-pap | 14:29:26 policy-pap | [2024-07-03T14:27:22.792+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:29:26 policy-pap | 
[2024-07-03T14:27:22.792+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:29:26 policy-pap | [2024-07-03T14:27:22.792+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842792 14:29:26 policy-pap | [2024-07-03T14:27:22.792+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 14:29:26 policy-pap | [2024-07-03T14:27:22.793+00:00|INFO|ServiceManager|main] Policy PAP starting topics 14:29:26 policy-pap | [2024-07-03T14:27:22.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8195bf70-75c1-45e7-9426-15be720361af, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:29:26 policy-pap | [2024-07-03T14:27:22.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=98eef03c-6c97-41d2-b0d1-0e3fd148d393, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:29:26 policy-pap | 
[2024-07-03T14:27:22.793+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6a1e44dd-949d-4040-bea9-ddb1f234f1b4, alive=false, publisher=null]]: starting 14:29:26 policy-pap | [2024-07-03T14:27:22.812+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:29:26 policy-pap | acks = -1 14:29:26 policy-pap | auto.include.jmx.reporter = true 14:29:26 policy-pap | batch.size = 16384 14:29:26 policy-pap | bootstrap.servers = [kafka:9092] 14:29:26 policy-pap | buffer.memory = 33554432 14:29:26 policy-pap | client.dns.lookup = use_all_dns_ips 14:29:26 policy-pap | client.id = producer-1 14:29:26 policy-pap | compression.type = none 14:29:26 policy-pap | connections.max.idle.ms = 540000 14:29:26 policy-pap | delivery.timeout.ms = 120000 14:29:26 policy-pap | enable.idempotence = true 14:29:26 policy-pap | interceptor.classes = [] 14:29:26 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:29:26 policy-pap | linger.ms = 0 14:29:26 policy-pap | max.block.ms = 60000 14:29:26 policy-pap | max.in.flight.requests.per.connection = 5 14:29:26 policy-pap | max.request.size = 1048576 14:29:26 policy-pap | metadata.max.age.ms = 300000 14:29:26 policy-pap | metadata.max.idle.ms = 300000 14:29:26 policy-pap | metric.reporters = [] 14:29:26 policy-pap | metrics.num.samples = 2 14:29:26 policy-pap | metrics.recording.level = INFO 14:29:26 policy-pap | metrics.sample.window.ms = 30000 14:29:26 policy-pap | partitioner.adaptive.partitioning.enable = true 14:29:26 policy-pap | partitioner.availability.timeout.ms = 0 14:29:26 policy-pap | partitioner.class = null 14:29:26 policy-pap | partitioner.ignore.keys = false 14:29:26 policy-pap | receive.buffer.bytes = 32768 14:29:26 policy-pap | reconnect.backoff.max.ms = 1000 14:29:26 policy-pap | reconnect.backoff.ms = 50 14:29:26 policy-pap | request.timeout.ms = 30000 14:29:26 policy-pap | retries = 2147483647 
14:29:26 policy-pap | retry.backoff.ms = 100 14:29:26 policy-pap | sasl.client.callback.handler.class = null 14:29:26 policy-pap | sasl.jaas.config = null 14:29:26 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:29:26 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:29:26 policy-pap | sasl.kerberos.service.name = null 14:29:26 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:29:26 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:29:26 policy-pap | sasl.login.callback.handler.class = null 14:29:26 policy-pap | sasl.login.class = null 14:29:26 policy-pap | sasl.login.connect.timeout.ms = null 14:29:26 policy-pap | sasl.login.read.timeout.ms = null 14:29:26 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:29:26 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:29:26 policy-pap | sasl.login.refresh.window.factor = 0.8 14:29:26 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:29:26 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.login.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.mechanism = GSSAPI 14:29:26 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 policy-pap | sasl.oauthbearer.expected.audience = null 14:29:26 policy-pap | sasl.oauthbearer.expected.issuer = null 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:29:26 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:29:26 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:29:26 policy-pap | security.protocol = PLAINTEXT 14:29:26 policy-pap | security.providers = null 14:29:26 policy-pap | send.buffer.bytes = 131072 14:29:26 policy-pap | socket.connection.setup.timeout.max.ms = 
30000 14:29:26 policy-pap | socket.connection.setup.timeout.ms = 10000 14:29:26 policy-pap | ssl.cipher.suites = null 14:29:26 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 policy-pap | ssl.endpoint.identification.algorithm = https 14:29:26 policy-pap | ssl.engine.factory.class = null 14:29:26 policy-pap | ssl.key.password = null 14:29:26 policy-pap | ssl.keymanager.algorithm = SunX509 14:29:26 policy-pap | ssl.keystore.certificate.chain = null 14:29:26 policy-pap | ssl.keystore.key = null 14:29:26 policy-pap | ssl.keystore.location = null 14:29:26 policy-pap | ssl.keystore.password = null 14:29:26 policy-pap | ssl.keystore.type = JKS 14:29:26 policy-pap | ssl.protocol = TLSv1.3 14:29:26 policy-pap | ssl.provider = null 14:29:26 policy-pap | ssl.secure.random.implementation = null 14:29:26 policy-pap | ssl.trustmanager.algorithm = PKIX 14:29:26 policy-pap | ssl.truststore.certificates = null 14:29:26 policy-pap | ssl.truststore.location = null 14:29:26 policy-pap | ssl.truststore.password = null 14:29:26 policy-pap | ssl.truststore.type = JKS 14:29:26 policy-pap | transaction.timeout.ms = 60000 14:29:26 policy-pap | transactional.id = null 14:29:26 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:29:26 policy-pap | 14:29:26 policy-pap | [2024-07-03T14:27:22.830+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
14:29:26 policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:29:26 policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:29:26 policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842849 14:29:26 policy-pap | [2024-07-03T14:27:22.849+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6a1e44dd-949d-4040-bea9-ddb1f234f1b4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:29:26 policy-pap | [2024-07-03T14:27:22.850+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32d97e7c-9ca6-43ed-9977-c911814efa02, alive=false, publisher=null]]: starting 14:29:26 policy-pap | [2024-07-03T14:27:22.850+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:29:26 policy-pap | acks = -1 14:29:26 policy-pap | auto.include.jmx.reporter = true 14:29:26 policy-pap | batch.size = 16384 14:29:26 policy-pap | bootstrap.servers = [kafka:9092] 14:29:26 policy-pap | buffer.memory = 33554432 14:29:26 policy-pap | client.dns.lookup = use_all_dns_ips 14:29:26 policy-pap | client.id = producer-2 14:29:26 policy-pap | compression.type = none 14:29:26 policy-pap | connections.max.idle.ms = 540000 14:29:26 policy-pap | delivery.timeout.ms = 120000 14:29:26 policy-pap | enable.idempotence = true 14:29:26 policy-pap | interceptor.classes = [] 14:29:26 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:29:26 policy-pap | linger.ms = 0 14:29:26 policy-pap | max.block.ms = 60000 14:29:26 policy-pap | max.in.flight.requests.per.connection = 5 14:29:26 policy-pap | max.request.size = 1048576 14:29:26 policy-pap | metadata.max.age.ms = 300000 14:29:26 policy-pap | metadata.max.idle.ms = 300000 14:29:26 policy-pap | metric.reporters = [] 
14:29:26 policy-pap | metrics.num.samples = 2 14:29:26 policy-pap | metrics.recording.level = INFO 14:29:26 policy-pap | metrics.sample.window.ms = 30000 14:29:26 policy-pap | partitioner.adaptive.partitioning.enable = true 14:29:26 policy-pap | partitioner.availability.timeout.ms = 0 14:29:26 policy-pap | partitioner.class = null 14:29:26 policy-pap | partitioner.ignore.keys = false 14:29:26 policy-pap | receive.buffer.bytes = 32768 14:29:26 policy-pap | reconnect.backoff.max.ms = 1000 14:29:26 policy-pap | reconnect.backoff.ms = 50 14:29:26 policy-pap | request.timeout.ms = 30000 14:29:26 policy-pap | retries = 2147483647 14:29:26 policy-pap | retry.backoff.ms = 100 14:29:26 policy-pap | sasl.client.callback.handler.class = null 14:29:26 policy-pap | sasl.jaas.config = null 14:29:26 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:29:26 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:29:26 policy-pap | sasl.kerberos.service.name = null 14:29:26 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:29:26 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:29:26 policy-pap | sasl.login.callback.handler.class = null 14:29:26 policy-pap | sasl.login.class = null 14:29:26 policy-pap | sasl.login.connect.timeout.ms = null 14:29:26 policy-pap | sasl.login.read.timeout.ms = null 14:29:26 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:29:26 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:29:26 policy-pap | sasl.login.refresh.window.factor = 0.8 14:29:26 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:29:26 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.login.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.mechanism = GSSAPI 14:29:26 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:29:26 policy-pap | sasl.oauthbearer.expected.audience = null 14:29:26 policy-pap | sasl.oauthbearer.expected.issuer = null 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 
3600000 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:29:26 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:29:26 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:29:26 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:29:26 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:29:26 policy-pap | security.protocol = PLAINTEXT 14:29:26 policy-pap | security.providers = null 14:29:26 policy-pap | send.buffer.bytes = 131072 14:29:26 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:29:26 policy-pap | socket.connection.setup.timeout.ms = 10000 14:29:26 policy-pap | ssl.cipher.suites = null 14:29:26 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:29:26 policy-pap | ssl.endpoint.identification.algorithm = https 14:29:26 policy-pap | ssl.engine.factory.class = null 14:29:26 policy-pap | ssl.key.password = null 14:29:26 policy-pap | ssl.keymanager.algorithm = SunX509 14:29:26 policy-pap | ssl.keystore.certificate.chain = null 14:29:26 policy-pap | ssl.keystore.key = null 14:29:26 policy-pap | ssl.keystore.location = null 14:29:26 policy-pap | ssl.keystore.password = null 14:29:26 policy-pap | ssl.keystore.type = JKS 14:29:26 policy-pap | ssl.protocol = TLSv1.3 14:29:26 policy-pap | ssl.provider = null 14:29:26 policy-pap | ssl.secure.random.implementation = null 14:29:26 policy-pap | ssl.trustmanager.algorithm = PKIX 14:29:26 policy-pap | ssl.truststore.certificates = null 14:29:26 policy-pap | ssl.truststore.location = null 14:29:26 policy-pap | ssl.truststore.password = null 14:29:26 policy-pap | ssl.truststore.type = JKS 14:29:26 policy-pap | transaction.timeout.ms = 60000 14:29:26 policy-pap | transactional.id = null 14:29:26 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:29:26 policy-pap | 14:29:26 policy-pap | 
[2024-07-03T14:27:22.851+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 14:29:26 policy-pap | [2024-07-03T14:27:22.853+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:29:26 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:29:26 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1720016842853 14:29:26 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32d97e7c-9ca6-43ed-9977-c911814efa02, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:29:26 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 14:29:26 policy-pap | [2024-07-03T14:27:22.854+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 14:29:26 policy-pap | [2024-07-03T14:27:22.859+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 14:29:26 policy-pap | [2024-07-03T14:27:22.860+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 14:29:26 policy-pap | [2024-07-03T14:27:22.862+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 14:29:26 policy-pap | [2024-07-03T14:27:22.862+00:00|INFO|TimerManager|Thread-9] timer manager update started 14:29:26 policy-pap | [2024-07-03T14:27:22.864+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 14:29:26 policy-pap | [2024-07-03T14:27:22.864+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 14:29:26 policy-pap | [2024-07-03T14:27:22.866+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 14:29:26 policy-pap | [2024-07-03T14:27:22.866+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 14:29:26 policy-pap | 
[2024-07-03T14:27:22.867+00:00|INFO|ServiceManager|main] Policy PAP started
14:29:26 policy-pap | [2024-07-03T14:27:22.868+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.554 seconds (process running for 10.136)
14:29:26 policy-pap | [2024-07-03T14:27:23.262+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
14:29:26 policy-pap | [2024-07-03T14:27:23.263+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q
14:29:26 policy-pap | [2024-07-03T14:27:23.263+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q
14:29:26 policy-pap | [2024-07-03T14:27:23.266+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q
14:29:26 policy-pap | [2024-07-03T14:27:23.356+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.356+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Cluster ID: oH0-pHXbT_qkvD-L4l6e0Q
14:29:26 policy-pap | [2024-07-03T14:27:23.384+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.397+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
14:29:26 policy-pap | [2024-07-03T14:27:23.399+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
14:29:26 policy-pap | [2024-07-03T14:27:23.482+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.527+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.591+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.643+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.701+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.749+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.807+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.855+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.914+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:23.966+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:24.019+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:24.073+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:24.129+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:24.184+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:24.244+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:29:26 policy-pap | [2024-07-03T14:27:24.297+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
14:29:26 policy-pap | [2024-07-03T14:27:24.303+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
14:29:26 policy-pap | [2024-07-03T14:27:24.337+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc
14:29:26 policy-pap | [2024-07-03T14:27:24.337+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
14:29:26 policy-pap | [2024-07-03T14:27:24.337+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
14:29:26 policy-pap | [2024-07-03T14:27:24.348+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
14:29:26 policy-pap | [2024-07-03T14:27:24.350+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] (Re-)joining group
14:29:26 policy-pap | [2024-07-03T14:27:24.353+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Request joining group due to: need to re-join with the given member-id: consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5
14:29:26 policy-pap | [2024-07-03T14:27:24.353+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
14:29:26 policy-pap | [2024-07-03T14:27:24.353+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] (Re-)joining group
14:29:26 policy-pap | [2024-07-03T14:27:27.361+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc', protocol='range'}
14:29:26 policy-pap | [2024-07-03T14:27:27.363+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Successfully joined group with generation Generation{generationId=1, memberId='consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5', protocol='range'}
14:29:26 policy-pap | [2024-07-03T14:27:27.373+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc=Assignment(partitions=[policy-pdp-pap-0])}
14:29:26 policy-pap | [2024-07-03T14:27:27.373+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Finished assignment for group at generation 1: {consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5=Assignment(partitions=[policy-pdp-pap-0])}
14:29:26 policy-pap | [2024-07-03T14:27:27.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Successfully synced group in generation Generation{generationId=1, memberId='consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3-96ab80bf-0485-4af4-8e22-f388a6176ea5', protocol='range'}
14:29:26 policy-pap | [2024-07-03T14:27:27.404+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
14:29:26 policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-80d763bd-86c7-4846-b555-35a51ccdc9fc', protocol='range'}
14:29:26 policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Adding newly assigned partitions: policy-pdp-pap-0
14:29:26 policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
14:29:26 policy-pap | [2024-07-03T14:27:27.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
14:29:26 policy-pap | [2024-07-03T14:27:27.430+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Found no committed offset for partition policy-pdp-pap-0
14:29:26 policy-pap | [2024-07-03T14:27:27.430+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
14:29:26 policy-pap | [2024-07-03T14:27:27.447+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-98eef03c-6c97-41d2-b0d1-0e3fd148d393-3, groupId=98eef03c-6c97-41d2-b0d1-0e3fd148d393] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
14:29:26 policy-pap | [2024-07-03T14:27:27.447+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
14:29:26 policy-pap | [2024-07-03T14:27:41.592+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet'
14:29:26 policy-pap | [2024-07-03T14:27:41.592+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet'
14:29:26 policy-pap | [2024-07-03T14:27:41.595+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms
14:29:26 policy-pap | [2024-07-03T14:27:44.187+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
14:29:26 policy-pap | []
14:29:26 policy-pap | [2024-07-03T14:27:44.188+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"}
14:29:26 policy-pap | [2024-07-03T14:27:44.188+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1b57ba60-4d67-444e-93a3-91747d2da0e8","timestampMs":1720016864154,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"}
14:29:26 policy-pap | [2024-07-03T14:27:44.195+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
14:29:26 policy-pap | [2024-07-03T14:27:44.284+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting
14:29:26 policy-pap | [2024-07-03T14:27:44.284+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting listener
14:29:26 policy-pap | [2024-07-03T14:27:44.284+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting timer
14:29:26 policy-pap | [2024-07-03T14:27:44.285+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285]
14:29:26 policy-pap | [2024-07-03T14:27:44.287+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting enqueue
14:29:26 policy-pap | [2024-07-03T14:27:44.287+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285]
14:29:26 policy-pap | [2024-07-03T14:27:44.287+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate started
14:29:26 policy-pap | [2024-07-03T14:27:44.293+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.332+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.332+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
14:29:26 policy-pap | [2024-07-03T14:27:44.333+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","timestampMs":1720016864262,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.333+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
14:29:26 policy-pap | [2024-07-03T14:27:44.361+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"}
14:29:26 policy-pap | [2024-07-03T14:27:44.362+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
14:29:26 policy-pap | [2024-07-03T14:27:44.363+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"846177ab-c494-4a97-a796-e73ca11e4459","timestampMs":1720016864344,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup"}
14:29:26 policy-pap | [2024-07-03T14:27:44.366+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping
14:29:26 policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping enqueue
14:29:26 policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping timer
14:29:26 policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285]
14:29:26 policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping listener
14:29:26 policy-pap | [2024-07-03T14:27:44.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopped
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate successful
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e start publishing next request
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting listener
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting timer
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386]
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange starting enqueue
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange started
14:29:26 policy-pap | [2024-07-03T14:27:44.386+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386]
14:29:26 policy-pap | [2024-07-03T14:27:44.387+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.392+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99adbbf5-9c0b-4530-9ab3-1c88b54e568b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"af09b611-5e53-47d2-baaa-12d8df2d805b","timestampMs":1720016864348,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.397+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 99adbbf5-9c0b-4530-9ab3-1c88b54e568b
14:29:26 policy-pap | [2024-07-03T14:27:44.403+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.404+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
14:29:26 policy-pap | [2024-07-03T14:27:44.409+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.409+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id eaa7f9bc-5e08-4e39-9676-3218fb3ee976
14:29:26 policy-pap | [2024-07-03T14:27:44.415+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","timestampMs":1720016864263,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.416+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
14:29:26 policy-pap | [2024-07-03T14:27:44.419+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eaa7f9bc-5e08-4e39-9676-3218fb3ee976","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"04f8172c-34b6-4a87-af94-498da42764fc","timestampMs":1720016864398,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping
14:29:26 policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping enqueue
14:29:26 policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping timer
14:29:26 policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386]
14:29:26 policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopping listener
14:29:26 policy-pap | [2024-07-03T14:27:44.420+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange stopped
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpStateChange successful
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e start publishing next request
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting listener
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting timer
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=7cf59cc2-d930-4715-b62d-a6f327b4fadd, expireMs=1720016894421]
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate starting enqueue
14:29:26 policy-pap | [2024-07-03T14:27:44.421+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate started
14:29:26 policy-pap | [2024-07-03T14:27:44.422+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.431+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.432+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
14:29:26 policy-pap | [2024-07-03T14:27:44.433+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"source":"pap-6e433648-1c0f-4bf3-92e2-2187c184928f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","timestampMs":1720016864403,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.433+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
14:29:26 policy-pap | [2024-07-03T14:27:44.442+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:29:26 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.442+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7cf59cc2-d930-4715-b62d-a6f327b4fadd
14:29:26 policy-pap | [2024-07-03T14:27:44.445+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:29:26 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7cf59cc2-d930-4715-b62d-a6f327b4fadd","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ace136ca-8f8d-4044-a45c-5215fb17ad51","timestampMs":1720016864434,"name":"apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:29:26 policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping
14:29:26 policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping enqueue
14:29:26 policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping timer
14:29:26 policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7cf59cc2-d930-4715-b62d-a6f327b4fadd, expireMs=1720016894421]
14:29:26 policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopping listener
14:29:26 policy-pap | [2024-07-03T14:27:44.446+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate stopped
14:29:26 policy-pap | [2024-07-03T14:27:44.450+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e PdpUpdate successful
14:29:26 policy-pap | [2024-07-03T14:27:44.450+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-50bb3b5e-689c-499b-ade9-ec9876eaeb0e has no more requests
14:29:26 policy-pap | [2024-07-03T14:28:14.286+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=99adbbf5-9c0b-4530-9ab3-1c88b54e568b, expireMs=1720016894285]
14:29:26 policy-pap | [2024-07-03T14:28:14.386+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=eaa7f9bc-5e08-4e39-9676-3218fb3ee976, expireMs=1720016894386]
14:29:26 policy-pap | [2024-07-03T14:29:22.866+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
14:29:26 ===================================
14:29:26 ======== Logs from prometheus ========
14:29:26 prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:589 level=info msg="No time or size retention was set so using the default time retention" duration=15d
14:29:26 prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:633 level=info msg="Starting Prometheus Server" mode=server version="(version=2.53.0, branch=HEAD, revision=4c35b9250afefede41c5f5acd76191f90f625898)"
14:29:26 prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:638 level=info build_context="(go=go1.22.4, platform=linux/amd64, user=root@7f8d89cbbd64, date=20240619-07:39:12, tags=netgo,builtinassets,stringlabels)"
14:29:26 prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:639 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
14:29:26 prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:640 level=info fd_limits="(soft=1048576, hard=1048576)"
14:29:26 prometheus | ts=2024-07-03T14:26:49.532Z caller=main.go:641 level=info vm_limits="(soft=unlimited, hard=unlimited)"
14:29:26 prometheus | ts=2024-07-03T14:26:49.540Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
14:29:26 prometheus | ts=2024-07-03T14:26:49.541Z caller=main.go:1148 level=info msg="Starting TSDB ..."
14:29:26 prometheus | ts=2024-07-03T14:26:49.542Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
14:29:26 prometheus | ts=2024-07-03T14:26:49.542Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
14:29:26 prometheus | ts=2024-07-03T14:26:49.545Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
14:29:26 prometheus | ts=2024-07-03T14:26:49.545Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.41µs
14:29:26 prometheus | ts=2024-07-03T14:26:49.545Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while"
14:29:26 prometheus | ts=2024-07-03T14:26:49.546Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
14:29:26 prometheus | ts=2024-07-03T14:26:49.546Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=80.241µs wal_replay_duration=565.453µs wbl_replay_duration=170ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.41µs total_replay_duration=719.526µs
14:29:26 prometheus | ts=2024-07-03T14:26:49.549Z caller=main.go:1169 level=info fs_type=EXT4_SUPER_MAGIC
14:29:26 prometheus | ts=2024-07-03T14:26:49.549Z caller=main.go:1172 level=info msg="TSDB started"
14:29:26 prometheus | ts=2024-07-03T14:26:49.549Z caller=main.go:1354 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
14:29:26 prometheus | ts=2024-07-03T14:26:49.552Z caller=main.go:1391 level=info msg="updated GOGC" old=100 new=75
14:29:26 prometheus | ts=2024-07-03T14:26:49.553Z caller=main.go:1402 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.891122ms db_storage=1.29µs remote_storage=1.92µs web_handler=1.7µs query_engine=1.28µs scrape=330.147µs scrape_sd=203.365µs notify=37.86µs notify_sd=43.201µs rules=2.07µs tracing=7.781µs
14:29:26 prometheus | ts=2024-07-03T14:26:49.553Z caller=main.go:1133 level=info msg="Server is ready to receive web requests."
14:29:26 prometheus | ts=2024-07-03T14:26:49.553Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..." 14:29:26 =================================== 14:29:26 ======== Logs from simulator ======== 14:29:26 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 14:29:26 simulator | overriding logback.xml 14:29:26 simulator | 2024-07-03 14:26:51,112 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 14:29:26 simulator | 2024-07-03 14:26:51,182 INFO org.onap.policy.models.simulators starting 14:29:26 simulator | 2024-07-03 14:26:51,183 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 14:29:26 simulator | 2024-07-03 14:26:51,410 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 14:29:26 simulator | 2024-07-03 14:26:51,411 INFO org.onap.policy.models.simulators starting A&AI simulator 14:29:26 simulator | 2024-07-03 14:26:51,543 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 14:29:26 simulator | 2024-07-03 14:26:51,556 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:51,559 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:51,567 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 14:29:26 simulator | 2024-07-03 14:26:51,646 INFO Session workerName=node0 14:29:26 simulator | 2024-07-03 14:26:52,195 INFO Using GSON for REST calls 14:29:26 simulator | 2024-07-03 14:26:52,284 INFO Started 
o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 14:29:26 simulator | 2024-07-03 14:26:52,293 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 14:29:26 simulator | 2024-07-03 14:26:52,309 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1723ms 14:29:26 simulator | 2024-07-03 14:26:52,309 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4249 ms. 
14:29:26 simulator | 2024-07-03 14:26:52,316 INFO org.onap.policy.models.simulators starting SDNC simulator 14:29:26 simulator | 2024-07-03 14:26:52,319 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 14:29:26 simulator | 2024-07-03 14:26:52,319 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:52,320 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:52,321 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 14:29:26 simulator | 2024-07-03 14:26:52,341 INFO Session workerName=node0 14:29:26 simulator | 2024-07-03 14:26:52,445 INFO Using GSON for REST calls 14:29:26 simulator | 2024-07-03 14:26:52,458 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 14:29:26 simulator | 2024-07-03 14:26:52,460 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 14:29:26 simulator | 2024-07-03 14:26:52,460 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1875ms 14:29:26 simulator | 2024-07-03 14:26:52,460 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC 
simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4859 ms. 14:29:26 simulator | 2024-07-03 14:26:52,461 INFO org.onap.policy.models.simulators starting SO simulator 14:29:26 simulator | 2024-07-03 14:26:52,464 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 14:29:26 simulator | 2024-07-03 14:26:52,464 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:52,465 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:52,466 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 14:29:26 simulator | 2024-07-03 14:26:52,472 INFO Session workerName=node0 14:29:26 simulator | 2024-07-03 14:26:52,528 INFO Using GSON for REST calls 14:29:26 simulator | 2024-07-03 14:26:52,541 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 14:29:26 simulator | 2024-07-03 14:26:52,543 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 14:29:26 simulator | 2024-07-03 14:26:52,544 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1959ms 14:29:26 simulator | 2024-07-03 14:26:52,544 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms. 14:29:26 simulator | 2024-07-03 14:26:52,545 INFO org.onap.policy.models.simulators starting VFC simulator 14:29:26 simulator | 2024-07-03 14:26:52,549 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 14:29:26 simulator | 2024-07-03 14:26:52,549 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:52,550 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:29:26 simulator | 2024-07-03 14:26:52,551 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 14:29:26 simulator | 2024-07-03 14:26:52,554 INFO Session workerName=node0 14:29:26 simulator | 2024-07-03 14:26:52,601 INFO Using GSON for REST calls 14:29:26 simulator | 2024-07-03 14:26:52,611 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 14:29:26 simulator | 2024-07-03 14:26:52,612 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 14:29:26 simulator | 2024-07-03 14:26:52,612 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2027ms 14:29:26 simulator | 2024-07-03 14:26:52,612 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4938 ms. 14:29:26 simulator | 2024-07-03 14:26:52,613 INFO org.onap.policy.models.simulators started 14:29:26 =================================== 14:29:26 ======== Logs from zookeeper ======== 14:29:26 zookeeper | ===> User 14:29:26 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 14:29:26 zookeeper | ===> Configuring ... 14:29:26 zookeeper | ===> Running preflight checks ... 14:29:26 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 14:29:26 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 14:29:26 zookeeper | ===> Launching ... 14:29:26 zookeeper | ===> Launching zookeeper ... 
14:29:26 zookeeper | [2024-07-03 14:26:52,164] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,175] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,175] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,175] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,175] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,177] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 14:29:26 zookeeper | [2024-07-03 14:26:52,177] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 14:29:26 zookeeper | [2024-07-03 14:26:52,177] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 14:29:26 zookeeper | [2024-07-03 14:26:52,177] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 14:29:26 zookeeper | [2024-07-03 14:26:52,179] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 14:29:26 zookeeper | [2024-07-03 14:26:52,180] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,180] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,180] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,180] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,180] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:29:26 zookeeper | [2024-07-03 14:26:52,180] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 14:29:26 zookeeper | [2024-07-03 14:26:52,199] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) 14:29:26 zookeeper | [2024-07-03 14:26:52,202] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 14:29:26 zookeeper | [2024-07-03 14:26:52,202] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 14:29:26 zookeeper | [2024-07-03 14:26:52,206] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 
(org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,217] INFO (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,218] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,218] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../sh
are/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.1
00.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-
4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,219] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,221] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
14:29:26 zookeeper | [2024-07-03 14:26:52,222] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,222] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,223] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
14:29:26 zookeeper | [2024-07-03 14:26:52,223] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
14:29:26 zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:29:26 zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:29:26 zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:29:26 zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:29:26 zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:29:26 zookeeper | [2024-07-03 14:26:52,224] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:29:26 zookeeper | [2024-07-03 14:26:52,226] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,227] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,227] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
14:29:26 zookeeper | [2024-07-03 14:26:52,227] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
14:29:26 zookeeper | [2024-07-03 14:26:52,227] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,248] INFO Logging initialized @615ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
14:29:26 zookeeper | [2024-07-03 14:26:52,355] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
14:29:26 zookeeper | [2024-07-03 14:26:52,355] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
14:29:26 zookeeper | [2024-07-03 14:26:52,375] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server)
14:29:26 zookeeper | [2024-07-03 14:26:52,409] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
14:29:26 zookeeper | [2024-07-03 14:26:52,409] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
14:29:26 zookeeper | [2024-07-03 14:26:52,410] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
14:29:26 zookeeper | [2024-07-03 14:26:52,414] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
14:29:26 zookeeper | [2024-07-03 14:26:52,423] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
14:29:26 zookeeper | [2024-07-03 14:26:52,442] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
14:29:26 zookeeper | [2024-07-03 14:26:52,443] INFO Started @810ms (org.eclipse.jetty.server.Server)
14:29:26 zookeeper | [2024-07-03 14:26:52,443] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,452] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
14:29:26 zookeeper | [2024-07-03 14:26:52,453] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
14:29:26 zookeeper | [2024-07-03 14:26:52,456] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:29:26 zookeeper | [2024-07-03 14:26:52,457] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:29:26 zookeeper | [2024-07-03 14:26:52,481] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:29:26 zookeeper | [2024-07-03 14:26:52,481] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:29:26 zookeeper | [2024-07-03 14:26:52,483] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
14:29:26 zookeeper | [2024-07-03 14:26:52,483] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
14:29:26 zookeeper | [2024-07-03 14:26:52,489] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
14:29:26 zookeeper | [2024-07-03 14:26:52,489] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:29:26 zookeeper | [2024-07-03 14:26:52,492] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
14:29:26 zookeeper | [2024-07-03 14:26:52,493] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:29:26 zookeeper | [2024-07-03 14:26:52,494] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:29:26 zookeeper | [2024-07-03 14:26:52,512] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
14:29:26 zookeeper | [2024-07-03 14:26:52,511] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
14:29:26 zookeeper | [2024-07-03 14:26:52,535] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
14:29:26 zookeeper | [2024-07-03 14:26:52,536] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
14:29:26 zookeeper | [2024-07-03 14:26:57,361] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
14:29:26 ===================================
14:29:26 Tearing down containers...
14:29:26 time="2024-07-03T14:29:26Z" level=warning msg="The \"TEST_ENV\" variable is not set. Defaulting to a blank string."
14:29:26 Container policy-apex-pdp Stopping
14:29:26 Container grafana Stopping
14:29:26 Container policy-csit Stopping
14:29:26 Container policy-csit Stopped
14:29:26 Container policy-csit Removing
14:29:26 Container policy-csit Removed
14:29:26 Container grafana Stopped
14:29:26 Container grafana Removing
14:29:26 Container grafana Removed
14:29:26 Container prometheus Stopping
14:29:27 Container prometheus Stopped
14:29:27 Container prometheus Removing
14:29:27 Container prometheus Removed
14:29:36 Container policy-apex-pdp Stopped
14:29:36 Container policy-apex-pdp Removing
14:29:36 Container policy-apex-pdp Removed
14:29:36 Container simulator Stopping
14:29:36 Container policy-pap Stopping
14:29:47 Container simulator Stopped
14:29:47 Container simulator Removing
14:29:47 Container policy-pap Stopped
14:29:47 Container policy-pap Removing
14:29:47 Container simulator Removed
14:29:47 Container policy-pap Removed
14:29:47 Container kafka Stopping
14:29:47 Container policy-api Stopping
14:29:48 Container kafka Stopped
14:29:48 Container kafka Removing
14:29:48 Container kafka Removed
14:29:48 Container zookeeper Stopping
14:29:48 Container zookeeper Stopped
14:29:48 Container zookeeper Removing
14:29:48 Container zookeeper Removed
14:29:57 Container policy-api Stopped
14:29:57 Container policy-api Removing
14:29:57 Container policy-api Removed
14:29:57 Container policy-db-migrator Stopping
14:29:57 Container policy-db-migrator Stopped
14:29:57 Container policy-db-migrator Removing
14:29:57 Container policy-db-migrator Removed
14:29:57 Container mariadb Stopping
14:29:58 Container mariadb Stopped
14:29:58 Container mariadb Removing
14:29:58 Container mariadb Removed
14:29:58 Network compose_default Removing
14:29:58 Network compose_default Removed
14:29:58 $ ssh-agent -k
14:29:58 unset SSH_AUTH_SOCK;
14:29:58 unset SSH_AGENT_PID;
14:29:58 echo Agent pid 2297 killed;
14:29:58 [ssh-agent] Stopped.
14:29:58 Robot results publisher started...
14:29:58 INFO: Checking test criticality is deprecated and will be dropped in a future release!
14:29:58 -Parsing output xml:
14:29:59 Done!
14:29:59 -Copying log files to build dir:
14:29:59 Done!
14:29:59 -Assigning results to build:
14:29:59 Done!
14:29:59 -Checking thresholds:
14:29:59 Done!
14:29:59 Done publishing Robot results.
14:29:59 Build step 'Publish Robot Framework test results' changed build result to UNSTABLE
14:29:59 [PostBuildScript] - [INFO] Executing post build scripts.
14:29:59 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins6961592831477526347.sh
14:29:59 ---> sysstat.sh
14:29:59 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins12617291833414988149.sh
14:29:59 ---> package-listing.sh
14:29:59 ++ facter osfamily
14:29:59 ++ tr '[:upper:]' '[:lower:]'
14:29:59 + OS_FAMILY=debian
14:29:59 + workspace=/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp
14:29:59 + START_PACKAGES=/tmp/packages_start.txt
14:29:59 + END_PACKAGES=/tmp/packages_end.txt
14:29:59 + DIFF_PACKAGES=/tmp/packages_diff.txt
14:29:59 + PACKAGES=/tmp/packages_start.txt
14:29:59 + '[' /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp ']'
14:29:59 + PACKAGES=/tmp/packages_end.txt
14:29:59 + case "${OS_FAMILY}" in
14:29:59 + dpkg -l
14:29:59 + grep '^ii'
14:29:59 + '[' -f /tmp/packages_start.txt ']'
14:29:59 + '[' -f /tmp/packages_end.txt ']'
14:29:59 + diff /tmp/packages_start.txt /tmp/packages_end.txt
14:30:00 + '[' /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp ']'
14:30:00 + mkdir -p /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/archives/
14:30:00 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/archives/
14:30:00 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins7113998865688811890.sh
14:30:00 ---> capture-instance-metadata.sh
14:30:00 Setup pyenv:
14:30:00 system
14:30:00 3.8.13
14:30:00 3.9.13
14:30:00 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version)
14:30:00 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv
14:30:01 lf-activate-venv(): INFO: Installing: lftools
14:30:08 lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH
14:30:08 INFO: Running in OpenStack, capturing instance metadata
14:30:09 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins12446928576871187660.sh
14:30:09 provisioning config files...
14:30:09 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp@tmp/config153692768833332721tmp
14:30:09 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
14:30:09 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
14:30:09 [EnvInject] - Injecting environment variables from a build step.
14:30:09 [EnvInject] - Injecting as environment variables the properties content
14:30:09 SERVER_ID=logs
14:30:09
14:30:09 [EnvInject] - Variables injected successfully.
14:30:09 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins7118233774203852844.sh
14:30:09 ---> create-netrc.sh
14:30:09 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins4384182666604703713.sh
14:30:09 ---> python-tools-install.sh
14:30:09 Setup pyenv:
14:30:09 system
14:30:09 3.8.13
14:30:09 3.9.13
14:30:09 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version)
14:30:09 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv
14:30:10 lf-activate-venv(): INFO: Installing: lftools
14:30:17 lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH
14:30:17 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins6560270366241382181.sh
14:30:17 ---> sudo-logs.sh
14:30:17 Archiving 'sudo' log..
14:30:17 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash /tmp/jenkins3348896855516403847.sh
14:30:17 ---> job-cost.sh
14:30:17 Setup pyenv:
14:30:17 system
14:30:17 3.8.13
14:30:17 3.9.13
14:30:17 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version)
14:30:17 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv
14:30:18 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
14:30:22 lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH
14:30:22 INFO: No Stack...
14:30:22 INFO: Retrieving Pricing Info for: v3-standard-8
14:30:22 INFO: Archiving Costs
14:30:22 [policy-apex-pdp-master-project-csit-verify-apex-pdp] $ /bin/bash -l /tmp/jenkins3943631210644930786.sh
14:30:22 ---> logs-deploy.sh
14:30:22 Setup pyenv:
14:30:22 system
14:30:22 3.8.13
14:30:22 3.9.13
14:30:22 * 3.10.6 (set by /w/workspace/policy-apex-pdp-master-project-csit-verify-apex-pdp/.python-version)
14:30:23 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-FKTr from file:/tmp/.os_lf_venv
14:30:23 lf-activate-venv(): INFO: Installing: lftools
14:30:31 lf-activate-venv(): INFO: Adding /tmp/venv-FKTr/bin to PATH
14:30:31 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-apex-pdp-master-project-csit-verify-apex-pdp/548
14:30:31 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
14:30:32 Archives upload complete.
14:30:32 INFO: archiving logs to Nexus
14:30:33 ---> uname -a:
14:30:33 Linux prd-ubuntu1804-docker-8c-8g-21085 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
14:30:33
14:30:33
14:30:33 ---> lscpu:
14:30:33 Architecture: x86_64
14:30:33 CPU op-mode(s): 32-bit, 64-bit
14:30:33 Byte Order: Little Endian
14:30:33 CPU(s): 8
14:30:33 On-line CPU(s) list: 0-7
14:30:33 Thread(s) per core: 1
14:30:33 Core(s) per socket: 1
14:30:33 Socket(s): 8
14:30:33 NUMA node(s): 1
14:30:33 Vendor ID: AuthenticAMD
14:30:33 CPU family: 23
14:30:33 Model: 49
14:30:33 Model name: AMD EPYC-Rome Processor
14:30:33 Stepping: 0
14:30:33 CPU MHz: 2799.998
14:30:33 BogoMIPS: 5599.99
14:30:33 Virtualization: AMD-V
14:30:33 Hypervisor vendor: KVM
14:30:33 Virtualization type: full
14:30:33 L1d cache: 32K
14:30:33 L1i cache: 32K
14:30:33 L2 cache: 512K
14:30:33 L3 cache: 16384K
14:30:33 NUMA node0 CPU(s): 0-7
14:30:33 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
14:30:33
14:30:33
14:30:33 ---> nproc:
14:30:33 8
14:30:33
14:30:33
14:30:33 ---> df -h:
14:30:33 Filesystem Size Used Avail Use% Mounted on
14:30:33 udev 16G 0 16G 0% /dev
14:30:33 tmpfs 3.2G 708K 3.2G 1% /run
14:30:33 /dev/vda1 155G 14G 141G 9% /
14:30:33 tmpfs 16G 0 16G 0% /dev/shm
14:30:33 tmpfs 5.0M 0 5.0M 0% /run/lock
14:30:33 tmpfs 16G 0 16G 0% /sys/fs/cgroup
14:30:33 /dev/vda15 105M 4.4M 100M 5% /boot/efi
14:30:33 tmpfs 3.2G 0 3.2G 0% /run/user/1001
14:30:33
14:30:33
14:30:33 ---> free -m:
14:30:33 total used free shared buff/cache available
14:30:33 Mem: 32167 889 24713 0 6564 30822
14:30:33 Swap: 1023 0 1023
14:30:33
14:30:33
14:30:33 ---> ip addr:
14:30:33 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
14:30:33 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14:30:33 inet 127.0.0.1/8 scope host lo
14:30:33 valid_lft forever preferred_lft forever
14:30:33 inet6 ::1/128 scope host
14:30:33 valid_lft forever preferred_lft forever
14:30:33 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
14:30:33 link/ether fa:16:3e:61:2c:18 brd ff:ff:ff:ff:ff:ff
14:30:33 inet 10.30.106.55/23 brd 10.30.107.255 scope global dynamic ens3
14:30:33 valid_lft 83453sec preferred_lft 83453sec
14:30:33 inet6 fe80::f816:3eff:fe61:2c18/64 scope link
14:30:33 valid_lft forever preferred_lft forever
14:30:33 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
14:30:33 link/ether 02:42:33:71:34:d6 brd ff:ff:ff:ff:ff:ff
14:30:33 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
14:30:33 valid_lft forever preferred_lft forever
14:30:33 inet6 fe80::42:33ff:fe71:34d6/64 scope link
14:30:33 valid_lft forever preferred_lft forever
14:30:33
14:30:33
14:30:33 ---> sar -b -r -n DEV:
14:30:33 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21085) 07/03/24 _x86_64_ (8 CPU)
14:30:33
14:30:33 13:41:27 LINUX RESTART (8 CPU)
14:30:33
14:30:33 13:42:02 tps rtps wtps bread/s bwrtn/s
14:30:33 13:43:01 31.54 13.00 18.54 688.15 17999.80
14:30:33 13:44:01 11.80 0.00 11.80 0.00 16680.82
14:30:33 13:45:01 11.40 0.00 11.40 0.00 16540.35
14:30:33 13:46:01 11.45 0.00 11.45 0.00 16544.44
14:30:33 13:47:01 11.58 0.00 11.58 0.00 16545.64
14:30:33 13:48:01 11.51 0.02 11.50 0.13 16679.49
14:30:33 13:49:01 11.48 0.00 11.48 0.00 16544.58
14:30:33 13:50:01 11.45 0.00 11.45 0.00 16411.40
14:30:33 13:51:01 7.77 0.00 7.77 0.00 10763.81
14:30:33 13:52:01 1.07 0.00 1.07 0.00 13.86
14:30:33 13:53:01 0.93 0.00 0.93 0.00 11.32
14:30:33 13:54:01 0.95 0.00 0.95 0.00 12.80
14:30:33 13:55:01 0.85 0.00 0.85 0.00 10.80
14:30:33 13:56:01 1.05 0.00 1.05 0.00 14.13
14:30:33 13:57:01 5.52 4.07 1.45 32.53 23.86
14:30:33 13:58:01 1.45 0.00 1.45 0.00 19.20
14:30:33 13:59:02 0.92 0.00 0.92 0.00 11.86
14:30:33 14:00:01 1.07 0.00 1.07 0.00 14.51
14:30:33 14:01:01 0.88 0.00 0.88 0.00 11.06
14:30:33 14:02:01 2.50 1.53 0.97 43.19 13.73
14:30:33 14:03:01 1.75 0.53 1.22 13.06 16.93
14:30:33 14:04:01 1.00 0.00 1.00 0.00 13.73
14:30:33 14:05:01 1.05 0.00 1.05 0.00 12.53
14:30:33 14:06:01 0.92 0.00 0.92 0.00 13.06
14:30:33 14:07:01 1.00 0.00 1.00 0.00 12.00
14:30:33 14:08:01 1.00 0.00 1.00 0.00 14.26
14:30:33 14:09:01 1.02 0.00 1.02 0.00 12.53
14:30:33 14:10:01 1.28 0.00 1.28 0.00 16.00
14:30:33 14:11:01 0.80 0.00 0.80 0.00 9.87
14:30:33 14:12:01 0.93 0.00 0.93 0.00 12.13
14:30:33 14:13:01 0.83 0.00 0.83 0.00 9.86
14:30:33 14:14:01 0.73 0.00 0.73 0.00 10.26
14:30:33 14:15:01 1.15 0.00 1.15 0.00 14.40
14:30:33 14:16:01 0.97 0.00 0.97 0.00 12.80
14:30:33 14:17:01 1.02 0.02 1.00 0.13 11.46
14:30:33 14:18:01 1.32 0.00 1.32 0.00 16.80
14:30:33 14:19:01 1.05 0.00 1.05 0.00 13.20
14:30:33 14:20:01 1.10 0.00 1.10 0.00 14.40
14:30:33 14:21:01 0.85 0.00 0.85 0.00 10.66
14:30:33 14:22:01 1.03 0.00 1.03 0.00 13.46
14:30:33 14:23:01 1.07 0.00 1.07 0.00 14.26
14:30:33 14:24:01 1.02 0.00 1.02 0.00 14.40
14:30:33 14:25:01 315.06 37.66 277.40 1765.71 5454.69
14:30:33 14:26:01 178.20 18.75 159.46 2274.29 29905.68
14:30:33 14:27:01 469.29 13.05 456.24 777.37 133191.72
14:30:33 14:28:01 69.81 0.18 69.62 30.79 11895.67
14:30:33 14:29:01 73.84 1.20 72.64 35.99 9115.19
14:30:33 14:30:01 55.37 0.62 54.76 30.93 1247.49
14:30:33 Average: 27.54 1.88 25.66 118.43 6997.79
14:30:33
14:30:33 13:42:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
14:30:33 13:43:01 30642368 31886212 2296852 6.97 38628 1533448 1317216 3.88 632012 1415812 80
14:30:33 13:44:01 30654248 31898212 2284972 6.94 38708 1533452 1317216 3.88 618328 1415784 184
14:30:33 13:45:01 30652252 31896376 2286968 6.94 38796 1533452 1317216 3.88 621400 1415784 16
14:30:33 13:46:01 30647712 31891940 2291508 6.96 38876 1533460 1317216 3.88 626156 1415788 128
14:30:33 13:47:01 30646664 31890984 2292556 6.96 38972 1533456 1333328 3.92 626740 1415796 8
14:30:33 13:48:01 30645208 31889656 2294012 6.96 39060 1533480 1333328 3.92 628224 1415804 64
14:30:33 13:49:01 30638176 31882696 2301044 6.99 39148 1533484 1333328 3.92 634716 1415812 12
14:30:33 13:50:01 30636728 31881380 2302492 6.99 39228 1533488 1333328 3.92 635884 1415808 8
14:30:33 13:51:01 30635124 31879860 2304096 6.99 39300 1533492 1349428 3.97 637308 1415812 148
14:30:33 13:52:01 30634164 31878928 2305056 7.00 39340 1533488 1349428 3.97 638348 1415816 8
14:30:33 13:53:01 30633144 31877976 2306076 7.00 39372 1533500 1349428 3.97 639764 1415820 4
14:30:33 13:54:01 30631656 31876516 2307564 7.01 39404 1533504 1349428 3.97 640700 1415824 8
14:30:33 13:55:01 30630620 31875524 2308600 7.01 39436 1533508 1349428 3.97 642012 1415828 4
14:30:33 13:56:01 30629492 31874420 2309728 7.01 39452 1533516 1349428 3.97 642948 1415836 40
14:30:33 13:57:01 30626304 31872336 2312916 7.02 40480 1533512 1383564 4.07 645032 1415840 176
14:30:33 13:58:01 30625164 31871264 2314056 7.03 40520 1533524 1383564 4.07 646324 1415844 124
14:30:33 13:59:02 30618316 31864456 2320904 7.05 40556 1533524 1383564 4.07 654056 1415844 160
14:30:33 14:00:01 30617288 31863468 2321932 7.05 40596 1533528 1383564 4.07 654952 1415848 12
14:30:33 14:01:01 30616132 31862364 2323088 7.05 40628 1533532 1383564 4.07 656176 1415852 164
14:30:33 14:02:01 30601712 31849448 2337508 7.10 40676 1534956 1383564 4.07 669956 1416880 236
14:30:33 14:03:01 30572336 31821016 2366884 7.19 40732 1535348 1364428 4.01 698200 1416576 20
14:30:33 14:04:01 30571912 31820656 2367308 7.19 40772 1535360 1364428 4.01 698204 1416580 204
14:30:33 14:05:01 30572380 31821176 2366840 7.19 40820 1535364 1364428 4.01 698504 1416584 8
14:30:33 14:06:01 30570700 31819536 2368520 7.19 40852 1535372 1364428 4.01 699604 1416592 208
14:30:33 14:07:01 30570692 31819564 2368528 7.19 40884 1535376 1364428 4.01 699504 1416596 8
14:30:33 14:08:01 30571104 31820008 2368116 7.19 40932 1535364 1364428 4.01 699572 1416600 48
14:30:33 14:09:01 30570856 31819812 2368364 7.19 40956 1535384 1364428 4.01 699660 1416604 40
14:30:33 14:10:01 30570908 31819892 2368312 7.19 40980 1535388 1348172 3.97 699648 1416608 12
14:30:33 14:11:01 30570448 31819456 2368772 7.19 40996 1535392 1348172 3.97 699736 1416612 132
14:30:33 14:12:01 30570440 31819492 2368780 7.19 41012 1535392 1348172 3.97 699872 1416612 152
14:30:33 14:13:01 30570124 31819188 2369096 7.19 41028 1535396 1348172 3.97 699920 1416616 152
14:30:33 14:14:01 30570164 31819280 2369056 7.19 41060 1535404 1348172 3.97 699732 1416624 256
14:30:33 14:15:01 30570248 31819436 2368972 7.19 41100 1535408 1348172 3.97 700112 1416628 156
14:30:33 14:16:01 30570320 31819544 2368900 7.19 41132 1535412 1348172 3.97 700028 1416632 48
14:30:33 14:17:01 30569232 31818528 2369988 7.20 41168 1535412 1348172 3.97 700488 1416636 192
14:30:33 14:18:01 30569280 31818628 2369940 7.19 41200 1535424 1348172 3.97 700160 1416640 176
14:30:33 14:19:01 30568924 31818328 2370296 7.20 41232 1535428 1348172 3.97 700412 1416648 156
14:30:33 14:20:01 30569216 31818584 2370004 7.20 41272 1535432 1348172 3.97 700256 1416652 8
14:30:33 14:21:01 30568904 31818280 2370316 7.20 41304 1535436 1348172 3.97 700344 1416656 132
14:30:33 14:22:01 30569004 31818452 2370216 7.20 41336 1535440 1348172 3.97 700420 1416660 228
14:30:33 14:23:01 30569012 31818496 2370208 7.20 41384 1535436 1348172 3.97 700400 1416664 28
14:30:33 14:24:01 30569056 31818592 2370164 7.20 41416 1535452 1348172 3.97 700456 1416668 224
14:30:33 14:25:01 30147628 31638120 2791592 8.47 60220 1747388 1508580 4.44 939664 1573992 113104
14:30:33 14:26:01 25665172 31576440 7274048 22.08 120520 5949060 2085256 6.14 1070532 5702292 3550700
14:30:33 14:27:01 24807332 30777728 8131888 24.69 145952 5946860 7909160 23.27 2021076 5528276 0
14:30:33 14:28:01 23165520 29526604 9773700 29.67 165332 6281740 9090316 26.75 3388344 5748580 46252
14:30:33 14:29:01 23098884 29497912 9840336 29.87 175700 6304932 9158196 26.95 3435076 5766340 588
14:30:33 14:30:01 25366700 31616692 7572520 22.99 176544 6170484 1546468 4.55 1361972 5634412 2628
14:30:33 Average: 29947062 31712989 2992158 9.08 52896 2017721 1834827 5.40 841728 1863271 77447
14:30:33
14:30:33 13:42:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
14:30:33 13:43:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:43:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:43:01 ens3 187.49 137.03 523.01 29.73 0.00 0.00 0.00 0.00
14:30:33 13:44:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:44:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:44:01 ens3 0.37 0.12 0.07 0.22 0.00 0.00 0.00 0.00
14:30:33 13:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:45:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:45:01 ens3 1.70 0.50 0.48 0.48 0.00 0.00 0.00 0.00
14:30:33 13:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:46:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:46:01 ens3 3.17 1.47 2.53 0.84 0.00 0.00 0.00 0.00
14:30:33 13:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:47:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:47:01 ens3 1.17 0.10 0.62 0.01 0.00 0.00 0.00 0.00
14:30:33 13:48:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:48:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:48:01 ens3 1.23 0.27 0.65 0.26 0.00 0.00 0.00 0.00
14:30:33 13:49:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:49:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:49:01 ens3 1.17 0.42 0.62 0.80 0.00 0.00 0.00 0.00
14:30:33 13:50:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:50:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:50:01 ens3 0.70 0.17 0.23 0.16 0.00 0.00 0.00 0.00
14:30:33 13:51:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:51:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:51:01 ens3 0.53 0.10 0.16 0.16 0.00 0.00 0.00 0.00
14:30:33 13:52:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:52:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:52:01 ens3 0.33 0.20 0.09 0.17 0.00 0.00 0.00 0.00
14:30:33 13:53:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:53:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:53:01 ens3 0.35 0.15 0.13 0.35 0.00 0.00 0.00 0.00
14:30:33 13:54:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:54:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:54:01 ens3 0.23 0.15 0.06 0.19 0.00 0.00 0.00 0.00
14:30:33 13:55:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:55:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:55:01 ens3 0.32 0.10 0.17 0.16 0.00 0.00 0.00 0.00
14:30:33 13:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:56:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:56:01 ens3 0.23 0.23 0.06 0.33 0.00 0.00 0.00 0.00
14:30:33 13:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:57:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:57:01 ens3 0.27 0.08 0.06 0.16 0.00 0.00 0.00 0.00
14:30:33 13:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:58:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 13:58:01 ens3 0.57 0.27 0.20 0.08 0.00 0.00 0.00 0.00
14:30:33 13:59:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:59:02 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 13:59:02 ens3 1.10 0.83 0.62 1.27 0.00 0.00 0.00 0.00
14:30:33 14:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:00:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:00:01 ens3 0.34 0.20 0.07 0.21 0.00 0.00 0.00 0.00
14:30:33 14:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:01:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:01:01 ens3 0.22 0.13 0.06 0.32 0.00 0.00 0.00 0.00
14:30:33 14:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:02:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:02:01 ens3 1.00 0.60 0.91 0.34 0.00 0.00 0.00 0.00
14:30:33 14:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:03:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:03:01 ens3 11.31 10.23 6.03 15.59 0.00 0.00 0.00 0.00
14:30:33 14:04:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:04:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:04:01 ens3 0.42 0.20 0.07 0.26 0.00 0.00 0.00 0.00
14:30:33 14:05:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:05:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:05:01 ens3 1.13 0.58 0.42 0.70 0.00 0.00 0.00 0.00
14:30:33 14:06:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:06:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:06:01 ens3 0.23 0.15 0.06 0.07 0.00 0.00 0.00 0.00
14:30:33 14:07:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:07:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:07:01 ens3 0.23 0.08 0.06 0.17 0.00 0.00 0.00 0.00
14:30:33 14:08:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:08:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:08:01 ens3 0.33 0.22 0.13 0.04 0.00 0.00 0.00 0.00
14:30:33 14:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:09:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:09:01 ens3 0.30 0.20 0.11 0.37 0.00 0.00 0.00 0.00
14:30:33 14:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:10:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:10:01 ens3 0.23 0.20 0.06 0.18 0.00 0.00 0.00 0.00
14:30:33 14:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:11:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:11:01 ens3 0.20 0.18 0.06 0.31 0.00 0.00 0.00 0.00
14:30:33 14:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:12:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:12:01 ens3 0.20 0.20 0.06 0.21 0.00 0.00 0.00 0.00
14:30:33 14:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:13:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:13:01 ens3 0.43 0.37 0.15 0.23 0.00 0.00 0.00 0.00
14:30:33 14:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:14:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:14:01 ens3 0.23 0.25 0.06 0.24 0.00 0.00 0.00 0.00
14:30:33 14:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:15:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:15:01 ens3 0.38 0.25 0.18 0.18 0.00 0.00 0.00 0.00
14:30:33 14:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:16:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:16:01 ens3 0.22 0.25 0.06 0.02 0.00 0.00 0.00 0.00
14:30:33 14:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:17:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:17:01 ens3 0.25 0.23 0.06 0.35 0.00 0.00 0.00 0.00
14:30:33 14:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:18:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:18:01 ens3 0.67 0.22 0.18 0.08 0.00 0.00 0.00 0.00
14:30:33 14:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:19:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:19:01 ens3 0.90 0.85 0.52 1.05 0.00 0.00 0.00 0.00
14:30:33 14:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:20:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:20:01 ens3 0.23 0.28 0.06 0.36 0.00 0.00 0.00 0.00
14:30:33 14:21:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:21:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:21:01 ens3 0.27 0.08 0.06 0.01 0.00 0.00 0.00 0.00
14:30:33 14:22:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:22:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:22:01 ens3 0.23 0.25 0.06 0.35 0.00 0.00 0.00 0.00
14:30:33 14:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:23:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:23:01 ens3 0.28 0.22 0.13 0.07 0.00 0.00 0.00 0.00
14:30:33 14:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:24:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:24:01 ens3 0.23 0.25 0.06 0.32 0.00 0.00 0.00 0.00
14:30:33 14:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:25:01 lo 0.93 0.93 0.10 0.10 0.00 0.00 0.00 0.00
14:30:33 14:25:01 ens3 148.11 105.87 999.45 46.71 0.00 0.00 0.00 0.00
14:30:33 14:26:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:26:01 lo 14.66 14.66 1.42 1.42 0.00 0.00 0.00 0.00
14:30:33 14:26:01 ens3 1328.56 752.17 33324.99 63.26 0.00 0.00 0.00 0.00
14:30:33 14:27:01 veth66e193a 0.38 0.50 0.02 0.03 0.00 0.00 0.00 0.00
14:30:33 14:27:01 veth8160a6a 0.38 0.55 0.02 0.03 0.00 0.00 0.00 0.00
14:30:33 14:27:01 veth83c1471 0.00 0.30 0.00 0.02 0.00 0.00 0.00 0.00
14:30:33 14:27:01 br-4e561b93ce74 0.80 0.60 0.07 0.37 0.00 0.00 0.00 0.00
14:30:33 14:28:01 veth66e193a 3.58 4.47 0.71 0.46 0.00 0.00 0.00 0.00
14:30:33 14:28:01 veth8160a6a 11.05 11.76 2.18 1.81 0.00 0.00 0.00 0.00
14:30:33 14:28:01 veth83c1471 0.00 0.12 0.00 0.01 0.00 0.00 0.00 0.00
14:30:33 14:28:01 br-4e561b93ce74 0.50 0.57 0.05 0.04 0.00 0.00 0.00 0.00
14:30:33 14:29:01 veth66e193a 3.43 4.95 0.86 0.39 0.00 0.00 0.00 0.00
14:30:33 14:29:01 veth8160a6a 6.30 9.23 1.46 0.71 0.00 0.00 0.00 0.00
14:30:33 14:29:01 veth83c1471 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00
14:30:33 14:29:01 br-4e561b93ce74 0.27 0.10 0.01 0.01 0.00 0.00 0.00 0.00
14:30:33 14:30:01 docker0 12.01 16.81 2.06 284.16 0.00 0.00 0.00 0.00
14:30:33 14:30:01 lo 31.23 31.23 2.78 2.78 0.00 0.00 0.00 0.00
14:30:33 14:30:01 ens3 1813.81 1109.53 36171.29 211.02 0.00 0.00 0.00 0.00
14:30:33 Average: docker0 0.25 0.35 0.04 5.92 0.00 0.00 0.00 0.00
14:30:33 Average: lo 0.57 0.57 0.05 0.05 0.00 0.00 0.00 0.00
14:30:33 Average: ens3 37.57 22.93 752.63 4.38 0.00 0.00 0.00 0.00
14:30:33
14:30:33
14:30:33 ---> sar -P ALL:
14:30:33 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21085) 07/03/24 _x86_64_ (8 CPU)
14:30:33
14:30:33 13:41:27 LINUX RESTART (8 CPU)
14:30:33
14:30:33 13:42:02 CPU %user %nice %system %iowait %steal %idle
14:30:33 13:43:01 all 1.83 0.00 0.23 1.02 0.01 96.90
14:30:33 13:43:01 0 2.43 0.00 0.39 7.00 0.03 90.14
14:30:33 13:43:01 1 4.44 0.00 0.24 0.02 0.02 95.29
14:30:33 13:43:01 2 1.34 0.00 0.29 0.64 0.00 97.73
14:30:33 13:43:01 3 2.12 0.00 0.20 0.25 0.00 97.42
14:30:33 13:43:01 4 0.93 0.00 0.20 0.07 0.00 98.79
14:30:33 13:43:01 5 0.53 0.00 0.07 0.19 0.02 99.20
14:30:33 13:43:01 6 1.27 0.00 0.25 0.00 0.00 98.47
14:30:33 13:43:01 7 1.61 0.00 0.17 0.00 0.02 98.20
14:30:33 13:44:01 all 0.17 0.00 0.01 0.91 0.00 98.91
14:30:33 13:44:01
14:30:33 13:44:01–14:24:01: host near-idle in every interval (all CPUs ≥ ~98% idle; CPU 0 showed 5–9% %iowait per interval until 13:51:01, then only isolated sub-2% %user blips on single CPUs)
14:30:33
14:30:33 14:24:01  CPU   %user   %nice  %system  %iowait  %steal   %idle
14:30:33 14:25:01  all    6.17    0.00     0.68     1.19    0.02   91.94
14:30:33 14:25:01    0    6.07    0.00     0.80     3.68    0.02   89.43
14:30:33 14:25:01    1    2.87    0.00     0.27     0.30    0.02   96.54
14:30:33 14:25:01    2    0.77    0.00     0.63     0.10    0.02   98.48
14:30:33 14:25:01    3    1.52    0.00     0.33     0.03    0.02   98.10
14:30:33 14:25:01    4   20.64    0.00     1.05     1.28    0.03   76.99
14:30:33 14:25:01    5    9.22    0.00     0.70     0.27    0.03   89.78
14:30:33 14:25:01    6    6.20    0.00     0.82     2.04    0.03   90.91
14:30:33 14:25:01    7    2.05    0.00     0.83     1.82    0.00   95.30
14:30:33 14:26:01  all   15.58    0.00     4.90     5.71    0.07   73.74
14:30:33 14:26:01    0   17.28    0.00     4.92     2.03    0.03   75.74
14:30:33 14:26:01    1   20.99    0.00     5.15     8.15    0.07   65.64
14:30:33 14:26:01    2   11.61    0.00     4.60     2.46    0.07   81.27
14:30:33 14:26:01    3   10.18    0.00     4.98     3.69    0.10   81.05
14:30:33 14:26:01    4   30.79    0.00     5.28     4.28    0.07   59.59
14:30:33 14:26:01    5   11.88    0.00     4.85    12.77    0.05   70.46
14:30:33 14:26:01    6   10.08    0.00     5.41     2.26    0.08   82.16
14:30:33 14:26:01    7   12.53    0.00     4.03     9.97    0.05   73.42
14:30:33 14:27:01  all    8.50    0.00     2.78    13.84    0.05   74.83
14:30:33 14:27:01    0    8.59    0.00     2.88     5.56    0.05   82.91
14:30:33 14:27:01    1    6.70    0.00     2.88    12.49    0.03   77.90
14:30:33 14:27:01    2    6.67    0.00     2.83     7.90    0.05   82.55
14:30:33 14:27:01    3    9.64    0.00     2.60     2.96    0.07   84.74
14:30:33 14:27:01    4    9.06    0.00     3.48    59.02    0.10   28.33
14:30:33 14:27:01    5    9.28    0.00     2.56    21.17    0.03   66.96
14:30:33 14:27:01    6    9.98    0.00     2.37     1.09    0.03   86.52
14:30:33 14:27:01    7    8.10    0.00     2.60     0.91    0.03   88.36
14:30:33 14:28:01  all   21.96    0.00     2.46     1.04    0.08   74.46
14:30:33 14:28:01    0   23.39    0.00     3.32     0.10    0.07   73.12
14:30:33 14:28:01    1   19.03    0.00     2.03     2.80    0.10   76.04
14:30:33 14:28:01    2   24.74    0.00     2.58     0.65    0.08   71.94
14:30:33 14:28:01    3   21.04    0.00     2.27     0.85    0.07   75.77
14:30:33 14:28:01    4   17.06    0.00     1.78     0.17    0.08   80.91
14:30:33 14:28:01    5   24.37    0.00     2.52     1.12    0.07   71.92
14:30:33 14:28:01    6   20.97    0.00     2.45     2.63    0.08   73.86
14:30:33 14:28:01    7   25.08    0.00     2.70     0.02    0.08   72.12
14:30:33 14:29:01  all    4.36    0.00     1.03     0.95    0.05   93.61
14:30:33 14:29:01    0    3.68    0.00     0.92     0.10    0.03   95.27
14:30:33 14:29:01    1    5.55    0.00     0.94     0.00    0.03   93.48
14:30:33 14:29:01    2    5.78    0.00     1.37     0.80    0.07   91.98
14:30:33 14:29:01    3    2.84    0.00     0.89     0.38    0.07   95.82
14:30:33 14:29:01    4    4.90    0.00     1.17     1.78    0.05   92.10
14:30:33 14:29:01    5    4.00    0.00     0.75     0.03    0.03   95.18
14:30:33 14:29:01    6    3.31    0.00     0.97     3.85    0.03   91.83
14:30:33 14:29:01    7    4.84    0.00     1.24     0.62    0.08   93.21
14:30:33 14:30:01  all    2.47    0.00     0.70     0.16    0.04   96.62
14:30:33 14:30:01    0    2.33    0.00     0.87     0.05    0.05   96.70
14:30:33 14:30:01    1    1.57    0.00     0.60     0.13    0.02   97.68
14:30:33 14:30:01    2    2.67    0.00     0.92     0.20    0.05   96.16
14:30:33 14:30:01    3    3.79    0.00     0.82     0.13    0.03   95.23
14:30:33 14:30:01    4    3.34    0.00     0.60     0.20    0.03   95.83
14:30:33 14:30:01    5    1.92    0.00     0.53     0.17    0.03   97.34
14:30:33 14:30:01    6    2.19    0.00     0.58     0.05    0.03   97.15
14:30:33 14:30:01    7    1.97    0.00     0.70     0.40    0.07   96.86
14:30:33 Average:  all    1.32    0.00     0.27     0.65    0.01   97.74
14:30:33 Average:    0    1.55    0.00     0.31     1.65    0.02   96.47
14:30:33 Average:    1    1.32    0.00     0.25     0.50    0.01   97.92
14:30:33 Average:    2    1.26    0.00     0.28     0.27    0.01   98.19
14:30:33 Average:    3    1.08    0.00     0.26     0.17    0.01   98.48
14:30:33 Average:    4    1.79    0.00     0.28     1.37    0.02   96.54
14:30:33 Average:    5    1.29    0.00     0.26     0.75    0.01   97.70
14:30:33 Average:    6    1.12    0.00     0.27     0.25    0.01   98.35
14:30:33 Average:    7    1.18    0.00     0.26     0.28    0.01   98.27
14:30:33