23:10:56 Started by timer
23:10:56 Running as SYSTEM
23:10:56 [EnvInject] - Loading node environment variables.
23:10:56 Building remotely on prd-ubuntu1804-docker-8c-8g-15168 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:56 [ssh-agent] Looking for ssh-agent implementation...
23:10:57 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:57 $ ssh-agent
23:10:57 SSH_AUTH_SOCK=/tmp/ssh-PgdznfpJiXk2/agent.2053
23:10:57 SSH_AGENT_PID=2055
23:10:57 [ssh-agent] Started.
23:10:57 Running ssh-add (command line suppressed)
23:10:57 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_16148026302590831665.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_16148026302590831665.key)
23:10:57 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:57 The recommended git tool is: NONE
23:10:59 using credential onap-jenkins-ssh
23:10:59 Wiping out workspace first.
23:10:59 Cloning the remote Git repository
23:10:59 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:10:59  > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:10:59 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:10:59  > git --version # timeout=10
23:10:59  > git --version # 'git version 2.17.1'
23:10:59 using GIT_SSH to set credentials Gerrit user
23:10:59 Verifying host key using manually-configured host key entries
23:10:59  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:10:59  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:10:59  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:10:59 Avoid second fetch
23:10:59  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:10:59 Checking out Revision 59c25aa2599da95a52baffe261065f84f2bf7e20 (refs/remotes/origin/master)
23:10:59  > git config core.sparsecheckout # timeout=10
23:10:59  > git checkout -f 59c25aa2599da95a52baffe261065f84f2bf7e20 # timeout=30
23:11:00 Commit message: "Add a new replica table in clampacm database in db migrator"
23:11:00  > git rev-list --no-walk c27bece1c12e93c2f780f80e1bdc12d5f53fd10f # timeout=10
23:11:03 provisioning config files...
23:11:03 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:03 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:03 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14529675982004088154.sh
23:11:03 ---> python-tools-install.sh
23:11:03 Setup pyenv:
23:11:03 * system (set by /opt/pyenv/version)
23:11:03 * 3.8.13 (set by /opt/pyenv/version)
23:11:03 * 3.9.13 (set by /opt/pyenv/version)
23:11:03 * 3.10.6 (set by /opt/pyenv/version)
23:11:07 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-5Ypn
23:11:07 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:10 lf-activate-venv(): INFO: Installing: lftools
23:11:43 lf-activate-venv(): INFO: Adding /tmp/venv-5Ypn/bin to PATH
23:11:43 Generating Requirements File
23:12:09 Python 3.10.6
23:12:10 pip 24.0 from /tmp/venv-5Ypn/lib/python3.10/site-packages/pip (python 3.10)
23:12:10 appdirs==1.4.4
23:12:10 argcomplete==3.3.0
23:12:10 aspy.yaml==1.3.0
23:12:10 attrs==23.2.0
23:12:10 autopage==0.5.2
23:12:10 beautifulsoup4==4.12.3
23:12:10 boto3==1.34.121
23:12:10 botocore==1.34.121
23:12:10 bs4==0.0.2
23:12:10 cachetools==5.3.3
23:12:10 certifi==2024.6.2
23:12:10 cffi==1.16.0
23:12:10 cfgv==3.4.0
23:12:10 chardet==5.2.0
23:12:10 charset-normalizer==3.3.2
23:12:10 click==8.1.7
23:12:10 cliff==4.7.0
23:12:10 cmd2==2.4.3
23:12:10 cryptography==3.3.2
23:12:10 debtcollector==3.0.0
23:12:10 decorator==5.1.1
23:12:10 defusedxml==0.7.1
23:12:10 Deprecated==1.2.14
23:12:10 distlib==0.3.8
23:12:10 dnspython==2.6.1
23:12:10 docker==4.2.2
23:12:10 dogpile.cache==1.3.3
23:12:10 email_validator==2.1.1
23:12:10 filelock==3.14.0
23:12:10 future==1.0.0
23:12:10 gitdb==4.0.11
23:12:10 GitPython==3.1.43
23:12:10 google-auth==2.29.0
23:12:10 httplib2==0.22.0
23:12:10 identify==2.5.36
23:12:10 idna==3.7
23:12:10 importlib-resources==1.5.0
23:12:10 iso8601==2.1.0
23:12:10 Jinja2==3.1.4
23:12:10 jmespath==1.0.1
23:12:10 jsonpatch==1.33
23:12:10 jsonpointer==2.4
23:12:10 jsonschema==4.22.0
23:12:10 jsonschema-specifications==2023.12.1
23:12:10 keystoneauth1==5.6.0
23:12:10 kubernetes==30.1.0
23:12:10 lftools==0.37.10
23:12:10 lxml==5.2.2
23:12:10 MarkupSafe==2.1.5
23:12:10 msgpack==1.0.8
23:12:10 multi_key_dict==2.0.3
23:12:10 munch==4.0.0
23:12:10 netaddr==1.3.0
23:12:10 netifaces==0.11.0
23:12:10 niet==1.4.2
23:12:10 nodeenv==1.9.1
23:12:10 oauth2client==4.1.3
23:12:10 oauthlib==3.2.2
23:12:10 openstacksdk==3.1.0
23:12:10 os-client-config==2.1.0
23:12:10 os-service-types==1.7.0
23:12:10 osc-lib==3.0.1
23:12:10 oslo.config==9.4.0
23:12:10 oslo.context==5.5.0
23:12:10 oslo.i18n==6.3.0
23:12:10 oslo.log==6.0.0
23:12:10 oslo.serialization==5.4.0
23:12:10 oslo.utils==7.1.0
23:12:10 packaging==24.0
23:12:10 pbr==6.0.0
23:12:10 platformdirs==4.2.2
23:12:10 prettytable==3.10.0
23:12:10 pyasn1==0.6.0
23:12:10 pyasn1_modules==0.4.0
23:12:10 pycparser==2.22
23:12:10 pygerrit2==2.0.15
23:12:10 PyGithub==2.3.0
23:12:10 PyJWT==2.8.0
23:12:10 PyNaCl==1.5.0
23:12:10 pyparsing==2.4.7
23:12:10 pyperclip==1.8.2
23:12:10 pyrsistent==0.20.0
23:12:10 python-cinderclient==9.5.0
23:12:10 python-dateutil==2.9.0.post0
23:12:10 python-heatclient==3.5.0
23:12:10 python-jenkins==1.8.2
23:12:10 python-keystoneclient==5.4.0
23:12:10 python-magnumclient==4.5.0
23:12:10 python-novaclient==18.6.0
23:12:10 python-openstackclient==6.6.0
23:12:10 python-swiftclient==4.6.0
23:12:10 PyYAML==6.0.1
23:12:10 referencing==0.35.1
23:12:10 requests==2.32.3
23:12:10 requests-oauthlib==2.0.0
23:12:10 requestsexceptions==1.4.0
23:12:10 rfc3986==2.0.0
23:12:10 rpds-py==0.18.1
23:12:10 rsa==4.9
23:12:10 ruamel.yaml==0.18.6
23:12:10 ruamel.yaml.clib==0.2.8
23:12:10 s3transfer==0.10.1
23:12:10 simplejson==3.19.2
23:12:10 six==1.16.0
23:12:10 smmap==5.0.1
23:12:10 soupsieve==2.5
23:12:10 stevedore==5.2.0
23:12:10 tabulate==0.9.0
23:12:10 toml==0.10.2
23:12:10 tomlkit==0.12.5
23:12:10 tqdm==4.66.4
23:12:10 typing_extensions==4.12.1
23:12:10 tzdata==2024.1
23:12:10 urllib3==1.26.18
23:12:10 virtualenv==20.26.2
23:12:10 wcwidth==0.2.13
23:12:10 websocket-client==1.8.0
23:12:10 wrapt==1.16.0
23:12:10 xdg==6.0.0
23:12:10 xmltodict==0.13.0
23:12:10 yq==3.4.3
23:12:10 [EnvInject] - Injecting environment variables from a build step.
23:12:10 [EnvInject] - Injecting as environment variables the properties content
23:12:10 SET_JDK_VERSION=openjdk17
23:12:10 GIT_URL="git://cloud.onap.org/mirror"
23:12:10
23:12:10 [EnvInject] - Variables injected successfully.
23:12:10 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins14149757334291815217.sh
23:12:10 ---> update-java-alternatives.sh
23:12:10 ---> Updating Java version
23:12:11 ---> Ubuntu/Debian system detected
23:12:11 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:11 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:11 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:12 openjdk version "17.0.4" 2022-07-19
23:12:12 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:12 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:12 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:12 [EnvInject] - Injecting environment variables from a build step.
23:12:12 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:12 [EnvInject] - Variables injected successfully.
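As an aside, the "Generating Requirements File" step above emits plain `pip freeze` output (`name==version` per line), so it can be diffed or checked mechanically between builds. A minimal sketch, with a hypothetical helper name and a three-package sample copied from the freeze above:

```python
def parse_freeze(text):
    """Parse `pip freeze` output ("name==version" per line) into a dict."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, _, version = line.partition("==")
            deps[name] = version
    return deps

# Sample entries taken from the frozen requirements above.
freeze = "lftools==0.37.10\ndocker==4.2.2\nPyYAML==6.0.1\n"
print(parse_freeze(freeze))  # {'lftools': '0.37.10', 'docker': '4.2.2', 'PyYAML': '6.0.1'}
```

Comparing two such dicts is enough to spot a dependency bump between consecutive CSIT runs.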
23:12:12 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins5838489966804272855.sh
23:12:12 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:12 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:12 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:12 Configure a credential helper to remove this warning. See
23:12:12 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:12
23:12:12 Login Succeeded
23:12:12 docker: 'compose' is not a docker command.
23:12:12 See 'docker --help'
23:12:12 Docker Compose Plugin not installed. Installing now...
23:12:12   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
23:12:12                                  Dload  Upload   Total   Spent    Left  Speed
23:12:13 100 60.0M  100 60.0M    0     0  93.0M      0 --:--:-- --:--:-- --:--:-- 93.0M
23:12:13 Setting project configuration for: pap
23:12:13 Configuring docker compose...
23:12:15 Starting apex-pdp application with Grafana
23:12:15 grafana Pulling
23:12:15 simulator Pulling
23:12:15 pap Pulling
23:12:15 kafka Pulling
23:12:15 zookeeper Pulling
23:12:15 prometheus Pulling
23:12:15 policy-db-migrator Pulling
23:12:15 api Pulling
23:12:15 mariadb Pulling
23:12:15 apex-pdp Pulling
23:12:15 31e352740f53 Download complete
23:12:15 e578a0c624a9 Download complete
23:12:15 49d5cc175bf3 Download complete
23:12:15 73f2dcbe3502 Download complete
23:12:16 1a8530682f8a Download complete
23:12:16 bf1933dc24dc Download complete
23:12:16 7868f013c211 Download complete
23:12:16 5a89213878cf Download complete
23:12:16 6d088ee3e534 Download complete
23:12:16 39f770e5feb8 Download complete
23:12:16 31e352740f53 Pull complete
23:12:16 d09e8e07fc25 Download complete
23:12:16 36524972d691 Download complete
23:12:16 ef9c08f83372 Download complete
23:12:16 cc9ccd74b7df Download complete
23:12:16 84b15477ea97 Download complete
23:12:16 b3f29df2fabd Download complete
23:12:16 f606e153f1ad Download complete
23:12:16 8026bfc2fb37 Download complete
23:12:16 67f8ce2807a6 Download complete
23:12:16 acaa2331ed73 Download complete
23:12:16 4cf448219b85 Download complete
23:12:16 31e352740f53 Already exists
23:12:17 6ce1c588fe03 Download complete
23:12:17 89b3be9e3a98 Download complete
23:12:17 9ddffcaebea1 Download complete
23:12:17 6a634b17ba79 Download complete
23:12:17 c796e22f7138 Download complete
23:12:17 fb4c47760659 Download complete
23:12:17 10ac4908093d Download complete
23:12:17 44779101e748 Download complete
23:12:17 a721db3e3f3d Download complete
23:12:17 1850a929b84a Download complete
23:12:17 397a918c7da3 Download complete
23:12:18 b358088867e5 Download complete
23:12:18 634de6c90876 Download complete
23:12:18 cd00854cfb1a Download complete
23:12:18 9fa9226be034 Download complete
23:12:18 84b15477ea97 Pull complete
23:12:18 1617e25568b2 Download complete
23:12:18 9fa9226be034 Pull complete
23:12:18 1617e25568b2 Pull complete
23:12:18 806be17e856d Download complete
23:12:19 1b30b2d9318a Download complete
23:12:19 d6c6c26dc98a Download complete
23:12:19 60290e82ca2c Download complete
23:12:19 63558e6a3d29 Download complete
23:12:19 78605ea207be
Downloading [================================================> ] 3.011kB/3.089kB 23:12:19 78605ea207be Downloading [==================================================>] 3.089kB/3.089kB 23:12:19 78605ea207be Verifying Checksum 23:12:19 78605ea207be Download complete 23:12:19 869e11012e0e Downloading [=====================================> ] 3.011kB/4.023kB 23:12:19 869e11012e0e Downloading [==================================================>] 4.023kB/4.023kB 23:12:19 869e11012e0e Verifying Checksum 23:12:19 869e11012e0e Download complete 23:12:19 c4426427fcc3 Downloading [==================================================>] 1.44kB/1.44kB 23:12:19 c4426427fcc3 Verifying Checksum 23:12:19 c4426427fcc3 Download complete 23:12:19 f6d077cd6629 Downloading [===============> ] 15.74MB/50.34MB 23:12:19 d247d9811eae Downloading [=> ] 3.009kB/139.8kB 23:12:19 d247d9811eae Downloading [==================================================>] 139.8kB/139.8kB 23:12:19 d247d9811eae Download complete 23:12:19 f1fb904ca1b9 Downloading [==================================================>] 100B/100B 23:12:19 f1fb904ca1b9 Verifying Checksum 23:12:19 f1fb904ca1b9 Download complete 23:12:19 1b30b2d9318a Extracting [> ] 557.1kB/55.45MB 23:12:19 1e12dd793eba Downloading [==================================================>] 721B/721B 23:12:19 1e12dd793eba Verifying Checksum 23:12:19 1e12dd793eba Download complete 23:12:19 b358088867e5 Extracting [=================> ] 62.95MB/180.3MB 23:12:19 4abcf2066143 Downloading [> ] 48.06kB/3.409MB 23:12:19 d09e8e07fc25 Extracting [==============================> ] 19.82MB/32.98MB 23:12:19 17c7b7b51500 Downloading [==================================================>] 140B/140B 23:12:19 17c7b7b51500 Verifying Checksum 23:12:19 17c7b7b51500 Download complete 23:12:19 3a874871ebf5 Downloading [> ] 31.68kB/3.162MB 23:12:19 6ce1c588fe03 Extracting [==========================================> ] 62.95MB/73.93MB 23:12:19 6ce1c588fe03 Extracting 
[==========================================> ] 62.95MB/73.93MB 23:12:19 b358088867e5 Extracting [===================> ] 68.52MB/180.3MB 23:12:19 3a874871ebf5 Downloading [=========> ] 621.5kB/3.162MB 23:12:19 f6d077cd6629 Downloading [============================> ] 28.44MB/50.34MB 23:12:19 4abcf2066143 Downloading [===============================> ] 2.162MB/3.409MB 23:12:19 10ac4908093d Pull complete 23:12:19 1b30b2d9318a Extracting [===> ] 3.342MB/55.45MB 23:12:19 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 23:12:19 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 23:12:19 4abcf2066143 Verifying Checksum 23:12:19 4abcf2066143 Download complete 23:12:19 6ce1c588fe03 Extracting [============================================> ] 65.18MB/73.93MB 23:12:19 6ce1c588fe03 Extracting [============================================> ] 65.18MB/73.93MB 23:12:19 4abcf2066143 Extracting [> ] 65.54kB/3.409MB 23:12:19 d09e8e07fc25 Extracting [================================> ] 21.27MB/32.98MB 23:12:19 4d8b5d34b1ef Downloading [> ] 48.06kB/4.333MB 23:12:19 3a874871ebf5 Verifying Checksum 23:12:19 3a874871ebf5 Download complete 23:12:19 ea2f71d64768 Downloading [===> ] 3.01kB/46.31kB 23:12:19 ea2f71d64768 Download complete 23:12:19 2d8d8a45d8d1 Downloading [======> ] 3.01kB/22.97kB 23:12:19 2d8d8a45d8d1 Download complete 23:12:19 b358088867e5 Extracting [=====================> ] 76.32MB/180.3MB 23:12:19 f6d077cd6629 Downloading [=========================================> ] 41.65MB/50.34MB 23:12:19 4d8b5d34b1ef Verifying Checksum 23:12:19 4d8b5d34b1ef Download complete 23:12:19 1b30b2d9318a Extracting [====> ] 5.014MB/55.45MB 23:12:19 6ce1c588fe03 Extracting [==============================================> ] 69.07MB/73.93MB 23:12:19 6ce1c588fe03 Extracting [==============================================> ] 69.07MB/73.93MB 23:12:19 ab976f46af30 Downloading [> ] 539.6kB/60.66MB 23:12:19 
d09e8e07fc25 Extracting [=================================> ] 22.35MB/32.98MB 23:12:19 8eef243a7847 Downloading [> ] 506.8kB/49.06MB 23:12:19 4abcf2066143 Extracting [=====> ] 393.2kB/3.409MB 23:12:19 f6d077cd6629 Verifying Checksum 23:12:19 f6d077cd6629 Download complete 23:12:19 5d8ca4014ed0 Downloading [============> ] 3.01kB/11.92kB 23:12:19 5d8ca4014ed0 Downloading [==================================================>] 11.92kB/11.92kB 23:12:19 5d8ca4014ed0 Verifying Checksum 23:12:19 5d8ca4014ed0 Download complete 23:12:19 b358088867e5 Extracting [======================> ] 81.89MB/180.3MB 23:12:19 2a0008f5c37f Downloading [==================================================>] 1.228kB/1.228kB 23:12:19 2a0008f5c37f Verifying Checksum 23:12:19 2a0008f5c37f Download complete 23:12:19 1b30b2d9318a Extracting [======> ] 7.242MB/55.45MB 23:12:19 ab976f46af30 Downloading [========> ] 10.27MB/60.66MB 23:12:19 8eef243a7847 Downloading [==========> ] 9.846MB/49.06MB 23:12:19 4abcf2066143 Extracting [================================================> ] 3.277MB/3.409MB 23:12:19 1b30b2d9318a Extracting [========> ] 8.913MB/55.45MB 23:12:19 8eef243a7847 Downloading [===============> ] 14.76MB/49.06MB 23:12:19 b358088867e5 Extracting [=======================> ] 84.12MB/180.3MB 23:12:19 ab976f46af30 Downloading [==============> ] 17.3MB/60.66MB 23:12:19 4abcf2066143 Extracting [==================================================>] 3.409MB/3.409MB 23:12:19 6ce1c588fe03 Extracting [================================================> ] 72.42MB/73.93MB 23:12:19 6ce1c588fe03 Extracting [================================================> ] 72.42MB/73.93MB 23:12:19 1b30b2d9318a Extracting [=============> ] 14.48MB/55.45MB 23:12:19 22ebf0e44c85 Downloading [> ] 376.1kB/37.02MB 23:12:19 22ebf0e44c85 Downloading [> ] 376.1kB/37.02MB 23:12:19 8eef243a7847 Downloading [===========================> ] 27.05MB/49.06MB 23:12:19 ab976f46af30 Downloading [==========================> ] 32.44MB/60.66MB 
23:12:19 22ebf0e44c85 Downloading [===> ] 2.26MB/37.02MB 23:12:19 22ebf0e44c85 Downloading [===> ] 2.26MB/37.02MB 23:12:19 44779101e748 Pull complete 23:12:19 b358088867e5 Extracting [========================> ] 88.01MB/180.3MB 23:12:19 1b30b2d9318a Extracting [==================> ] 20.61MB/55.45MB 23:12:19 d09e8e07fc25 Extracting [====================================> ] 23.79MB/32.98MB 23:12:19 8eef243a7847 Downloading [================================> ] 31.96MB/49.06MB 23:12:19 ab976f46af30 Downloading [==============================> ] 37.31MB/60.66MB 23:12:20 22ebf0e44c85 Downloading [=========> ] 6.765MB/37.02MB 23:12:20 22ebf0e44c85 Downloading [=========> ] 6.765MB/37.02MB 23:12:20 6ce1c588fe03 Extracting [==================================================>] 73.93MB/73.93MB 23:12:20 6ce1c588fe03 Extracting [==================================================>] 73.93MB/73.93MB 23:12:20 1b30b2d9318a Extracting [====================> ] 22.84MB/55.45MB 23:12:20 d09e8e07fc25 Extracting [=====================================> ] 24.87MB/32.98MB 23:12:20 b358088867e5 Extracting [=========================> ] 90.24MB/180.3MB 23:12:20 8eef243a7847 Downloading [=======================================> ] 38.85MB/49.06MB 23:12:20 ab976f46af30 Downloading [======================================> ] 46.5MB/60.66MB 23:12:20 22ebf0e44c85 Downloading [============> ] 9.403MB/37.02MB 23:12:20 22ebf0e44c85 Downloading [============> ] 9.403MB/37.02MB 23:12:20 1b30b2d9318a Extracting [=========================> ] 28.41MB/55.45MB 23:12:20 b358088867e5 Extracting [=========================> ] 92.47MB/180.3MB 23:12:20 d09e8e07fc25 Extracting [========================================> ] 26.67MB/32.98MB 23:12:20 a721db3e3f3d Extracting [> ] 65.54kB/5.526MB 23:12:20 8eef243a7847 Downloading [=================================================> ] 48.68MB/49.06MB 23:12:20 22ebf0e44c85 Downloading [=======================> ] 17.69MB/37.02MB 23:12:20 22ebf0e44c85 Downloading 
[=======================> ] 17.69MB/37.02MB 23:12:20 8eef243a7847 Verifying Checksum 23:12:20 ab976f46af30 Downloading [================================================> ] 58.39MB/60.66MB 23:12:20 8eef243a7847 Download complete 23:12:20 ab976f46af30 Verifying Checksum 23:12:20 ab976f46af30 Download complete 23:12:20 1b30b2d9318a Extracting [================================> ] 35.65MB/55.45MB 23:12:20 b358088867e5 Extracting [==========================> ] 95.81MB/180.3MB 23:12:20 d09e8e07fc25 Extracting [===========================================> ] 28.48MB/32.98MB 23:12:20 22ebf0e44c85 Downloading [========================================> ] 29.76MB/37.02MB 23:12:20 22ebf0e44c85 Downloading [========================================> ] 29.76MB/37.02MB 23:12:20 1b30b2d9318a Extracting [===========================================> ] 48.46MB/55.45MB 23:12:20 6b11e56702ad Downloading [> ] 77.31kB/7.707MB 23:12:20 6b11e56702ad Downloading [> ] 77.31kB/7.707MB 23:12:20 4abcf2066143 Pull complete 23:12:20 17c7b7b51500 Extracting [==================================================>] 140B/140B 23:12:20 17c7b7b51500 Extracting [==================================================>] 140B/140B 23:12:20 22ebf0e44c85 Verifying Checksum 23:12:20 22ebf0e44c85 Download complete 23:12:20 22ebf0e44c85 Verifying Checksum 23:12:20 22ebf0e44c85 Download complete 23:12:20 00b33c871d26 Downloading [> ] 527.6kB/253.3MB 23:12:20 00b33c871d26 Downloading [> ] 527.6kB/253.3MB 23:12:20 a721db3e3f3d Extracting [==> ] 262.1kB/5.526MB 23:12:20 d09e8e07fc25 Extracting [==============================================> ] 30.64MB/32.98MB 23:12:20 b358088867e5 Extracting [===========================> ] 100.3MB/180.3MB 23:12:20 6ce1c588fe03 Pull complete 23:12:20 6ce1c588fe03 Pull complete 23:12:20 1b30b2d9318a Extracting [=================================================> ] 54.59MB/55.45MB 23:12:20 6b11e56702ad Downloading [====================================> ] 5.648MB/7.707MB 23:12:20 6b11e56702ad 
Downloading [====================================> ] 5.648MB/7.707MB 23:12:20 f606e153f1ad Extracting [==================================================>] 293B/293B 23:12:20 36524972d691 Extracting [==================================================>] 293B/293B 23:12:20 f606e153f1ad Extracting [==================================================>] 293B/293B 23:12:20 36524972d691 Extracting [==================================================>] 293B/293B 23:12:20 6b11e56702ad Download complete 23:12:20 6b11e56702ad Download complete 23:12:20 a721db3e3f3d Extracting [=================> ] 1.901MB/5.526MB 23:12:20 00b33c871d26 Downloading [> ] 4.815MB/253.3MB 23:12:20 00b33c871d26 Downloading [> ] 4.815MB/253.3MB 23:12:20 d09e8e07fc25 Extracting [================================================> ] 32.08MB/32.98MB 23:12:20 d09e8e07fc25 Extracting [==================================================>] 32.98MB/32.98MB 23:12:20 b358088867e5 Extracting [============================> ] 102.5MB/180.3MB 23:12:20 53d69aa7d3fc Downloading [=> ] 720B/19.96kB 23:12:20 53d69aa7d3fc Downloading [=> ] 720B/19.96kB 23:12:20 53d69aa7d3fc Downloading [==================================================>] 19.96kB/19.96kB 23:12:20 53d69aa7d3fc Verifying Checksum 23:12:20 53d69aa7d3fc Download complete 23:12:20 53d69aa7d3fc Verifying Checksum 23:12:20 53d69aa7d3fc Download complete 23:12:20 22ebf0e44c85 Extracting [> ] 393.2kB/37.02MB 23:12:20 22ebf0e44c85 Extracting [> ] 393.2kB/37.02MB 23:12:20 a721db3e3f3d Extracting [====================================> ] 4.063MB/5.526MB 23:12:20 00b33c871d26 Downloading [==> ] 13.89MB/253.3MB 23:12:20 00b33c871d26 Downloading [==> ] 13.89MB/253.3MB 23:12:20 b358088867e5 Extracting [============================> ] 103.6MB/180.3MB 23:12:20 1b30b2d9318a Extracting [=================================================> ] 55.15MB/55.45MB 23:12:20 91ef9543149d Downloading [================================> ] 719B/1.101kB 23:12:20 91ef9543149d Downloading 
[==================================================>] 1.101kB/1.101kB 23:12:20 91ef9543149d Verifying Checksum 23:12:20 91ef9543149d Download complete 23:12:20 91ef9543149d Downloading [================================> ] 719B/1.101kB 23:12:20 91ef9543149d Downloading [==================================================>] 1.101kB/1.101kB 23:12:20 91ef9543149d Download complete 23:12:20 a3ab11953ef9 Downloading [> ] 409.6kB/39.52MB 23:12:20 a3ab11953ef9 Downloading [> ] 409.6kB/39.52MB 23:12:20 17c7b7b51500 Pull complete 23:12:20 36524972d691 Pull complete 23:12:20 f606e153f1ad Pull complete 23:12:20 d09e8e07fc25 Pull complete 23:12:20 22ebf0e44c85 Extracting [==> ] 1.573MB/37.02MB 23:12:20 22ebf0e44c85 Extracting [==> ] 1.573MB/37.02MB 23:12:20 3a874871ebf5 Extracting [> ] 32.77kB/3.162MB 23:12:20 a721db3e3f3d Extracting [========================================> ] 4.456MB/5.526MB 23:12:20 ef9c08f83372 Extracting [============> ] 32.77kB/127.4kB 23:12:20 e578a0c624a9 Extracting [==================================================>] 1.077kB/1.077kB 23:12:20 ef9c08f83372 Extracting [==================================================>] 127.4kB/127.4kB 23:12:20 8026bfc2fb37 Extracting [============> ] 32.77kB/127kB 23:12:20 e578a0c624a9 Extracting [==================================================>] 1.077kB/1.077kB 23:12:20 1b30b2d9318a Extracting [==================================================>] 55.45MB/55.45MB 23:12:20 00b33c871d26 Downloading [====> ] 21.92MB/253.3MB 23:12:20 00b33c871d26 Downloading [====> ] 21.92MB/253.3MB 23:12:20 b358088867e5 Extracting [=============================> ] 106.4MB/180.3MB 23:12:20 8026bfc2fb37 Extracting [==================================================>] 127kB/127kB 23:12:20 8026bfc2fb37 Extracting [==================================================>] 127kB/127kB 23:12:20 a3ab11953ef9 Downloading [================> ] 13.4MB/39.52MB 23:12:20 a3ab11953ef9 Downloading [================> ] 13.4MB/39.52MB 23:12:20 2ec4f59af178 
Downloading [========================================> ] 721B/881B 23:12:20 2ec4f59af178 Downloading [==================================================>] 881B/881B 23:12:20 2ec4f59af178 Downloading [========================================> ] 721B/881B 23:12:20 2ec4f59af178 Downloading [==================================================>] 881B/881B 23:12:20 2ec4f59af178 Verifying Checksum 23:12:20 2ec4f59af178 Verifying Checksum 23:12:20 2ec4f59af178 Download complete 23:12:20 2ec4f59af178 Download complete 23:12:20 22ebf0e44c85 Extracting [=====> ] 4.325MB/37.02MB 23:12:20 22ebf0e44c85 Extracting [=====> ] 4.325MB/37.02MB 23:12:20 a721db3e3f3d Extracting [==========================================> ] 4.719MB/5.526MB 23:12:20 3a874871ebf5 Extracting [=====> ] 327.7kB/3.162MB 23:12:20 00b33c871d26 Downloading [=======> ] 35.9MB/253.3MB 23:12:20 00b33c871d26 Downloading [=======> ] 35.9MB/253.3MB 23:12:20 b358088867e5 Extracting [==============================> ] 109.7MB/180.3MB 23:12:20 a3ab11953ef9 Downloading [===================================> ] 28.44MB/39.52MB 23:12:20 a3ab11953ef9 Downloading [===================================> ] 28.44MB/39.52MB 23:12:20 1b30b2d9318a Pull complete 23:12:20 ef9c08f83372 Pull complete 23:12:20 cc9ccd74b7df Extracting [==================================================>] 1.147kB/1.147kB 23:12:20 22ebf0e44c85 Extracting [=======> ] 5.898MB/37.02MB 23:12:20 22ebf0e44c85 Extracting [=======> ] 5.898MB/37.02MB 23:12:20 cc9ccd74b7df Extracting [==================================================>] 1.147kB/1.147kB 23:12:20 a721db3e3f3d Extracting [============================================> ] 4.915MB/5.526MB 23:12:20 8026bfc2fb37 Pull complete 23:12:20 67f8ce2807a6 Extracting [==================================================>] 1.321kB/1.321kB 23:12:20 67f8ce2807a6 Extracting [==================================================>] 1.321kB/1.321kB 23:12:20 3a874871ebf5 Extracting [=====================> ] 1.376MB/3.162MB 23:12:20 
00b33c871d26 Downloading [=========> ] 49.86MB/253.3MB 23:12:20 00b33c871d26 Downloading [=========> ] 49.86MB/253.3MB 23:12:20 8b7e81cd5ef1 Downloading [==================================================>] 131B/131B 23:12:20 8b7e81cd5ef1 Downloading [==================================================>] 131B/131B 23:12:20 8b7e81cd5ef1 Verifying Checksum 23:12:20 8b7e81cd5ef1 Download complete 23:12:20 8b7e81cd5ef1 Verifying Checksum 23:12:20 8b7e81cd5ef1 Download complete 23:12:21 a721db3e3f3d Extracting [==================================================>] 5.526MB/5.526MB 23:12:21 a3ab11953ef9 Verifying Checksum 23:12:21 a3ab11953ef9 Download complete 23:12:21 a3ab11953ef9 Verifying Checksum 23:12:21 a3ab11953ef9 Download complete 23:12:21 00b33c871d26 Downloading [===========> ] 56.33MB/253.3MB 23:12:21 00b33c871d26 Downloading [===========> ] 56.33MB/253.3MB 23:12:21 b358088867e5 Extracting [===============================> ] 112MB/180.3MB 23:12:21 3a874871ebf5 Extracting [===============================================> ] 3.015MB/3.162MB 23:12:21 f6d077cd6629 Extracting [> ] 524.3kB/50.34MB 23:12:21 e578a0c624a9 Pull complete 23:12:21 49d5cc175bf3 Extracting [==================================================>] 5.326kB/5.326kB 23:12:21 49d5cc175bf3 Extracting [==================================================>] 5.326kB/5.326kB 23:12:21 22ebf0e44c85 Extracting [==========> ] 7.471MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [==========> ] 7.471MB/37.02MB 23:12:21 a721db3e3f3d Pull complete 23:12:21 1850a929b84a Extracting [==================================================>] 149B/149B 23:12:21 1850a929b84a Extracting [==================================================>] 149B/149B 23:12:21 67f8ce2807a6 Pull complete 23:12:21 c52916c1316e Downloading [==================================================>] 171B/171B 23:12:21 c52916c1316e Downloading [==================================================>] 171B/171B 23:12:21 c52916c1316e Verifying Checksum 23:12:21 
c52916c1316e Download complete 23:12:21 c52916c1316e Verifying Checksum 23:12:21 c52916c1316e Download complete 23:12:21 cc9ccd74b7df Pull complete 23:12:21 00b33c871d26 Downloading [============> ] 64.37MB/253.3MB 23:12:21 00b33c871d26 Downloading [============> ] 64.37MB/253.3MB 23:12:21 3a874871ebf5 Extracting [=================================================> ] 3.146MB/3.162MB 23:12:21 f6d077cd6629 Extracting [==> ] 2.621MB/50.34MB 23:12:21 b358088867e5 Extracting [===============================> ] 114.2MB/180.3MB 23:12:21 3a874871ebf5 Extracting [==================================================>] 3.162MB/3.162MB 23:12:21 7a1cb9ad7f75 Downloading [> ] 535.8kB/115.2MB 23:12:21 22ebf0e44c85 Extracting [=============> ] 9.83MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [=============> ] 9.83MB/37.02MB 23:12:21 fb4c47760659 Extracting [> ] 557.1kB/98.32MB 23:12:21 00b33c871d26 Downloading [===============> ] 78.29MB/253.3MB 23:12:21 00b33c871d26 Downloading [===============> ] 78.29MB/253.3MB 23:12:21 0a92c7dea7af Downloading [==========> ] 720B/3.449kB 23:12:21 0a92c7dea7af Downloading [==================================================>] 3.449kB/3.449kB 23:12:21 0a92c7dea7af Verifying Checksum 23:12:21 0a92c7dea7af Download complete 23:12:21 b358088867e5 Extracting [================================> ] 117.5MB/180.3MB 23:12:21 7a1cb9ad7f75 Downloading [=====> ] 12.94MB/115.2MB 23:12:21 f6d077cd6629 Extracting [=====> ] 5.243MB/50.34MB 23:12:21 acaa2331ed73 Extracting [> ] 557.1kB/84.46MB 23:12:21 1850a929b84a Pull complete 23:12:21 fb4c47760659 Extracting [===> ] 7.799MB/98.32MB 23:12:21 3a874871ebf5 Pull complete 23:12:21 49d5cc175bf3 Pull complete 23:12:21 397a918c7da3 Extracting [==================================================>] 327B/327B 23:12:21 397a918c7da3 Extracting [==================================================>] 327B/327B 23:12:21 22ebf0e44c85 Extracting [===============> ] 11.4MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [===============> ] 
11.4MB/37.02MB 23:12:21 73f2dcbe3502 Extracting [==================================================>] 5.316kB/5.316kB 23:12:21 73f2dcbe3502 Extracting [==================================================>] 5.316kB/5.316kB 23:12:21 4d8b5d34b1ef Extracting [> ] 65.54kB/4.333MB 23:12:21 00b33c871d26 Downloading [=================> ] 90.66MB/253.3MB 23:12:21 00b33c871d26 Downloading [=================> ] 90.66MB/253.3MB 23:12:21 b358088867e5 Extracting [=================================> ] 119.2MB/180.3MB 23:12:21 7a1cb9ad7f75 Downloading [========> ] 19.35MB/115.2MB 23:12:21 acaa2331ed73 Extracting [===> ] 6.128MB/84.46MB 23:12:21 f6d077cd6629 Extracting [=======> ] 7.864MB/50.34MB 23:12:21 fb4c47760659 Extracting [======> ] 13.37MB/98.32MB 23:12:21 22ebf0e44c85 Extracting [===================> ] 14.16MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [===================> ] 14.16MB/37.02MB 23:12:21 d93f69e96600 Downloading [> ] 538.9kB/115.2MB 23:12:21 00b33c871d26 Downloading [===================> ] 99.28MB/253.3MB 23:12:21 00b33c871d26 Downloading [===================> ] 99.28MB/253.3MB 23:12:21 4d8b5d34b1ef Extracting [===> ] 262.1kB/4.333MB 23:12:21 73f2dcbe3502 Pull complete 23:12:21 1a8530682f8a Extracting [==================================================>] 1.041kB/1.041kB 23:12:21 1a8530682f8a Extracting [==================================================>] 1.041kB/1.041kB 23:12:21 7a1cb9ad7f75 Downloading [==============> ] 32.26MB/115.2MB 23:12:21 acaa2331ed73 Extracting [========> ] 13.93MB/84.46MB 23:12:21 f6d077cd6629 Extracting [=========> ] 9.437MB/50.34MB 23:12:21 b358088867e5 Extracting [=================================> ] 122MB/180.3MB 23:12:21 22ebf0e44c85 Extracting [=======================> ] 17.3MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [=======================> ] 17.3MB/37.02MB 23:12:21 fb4c47760659 Extracting [========> ] 17.27MB/98.32MB 23:12:21 d93f69e96600 Downloading [====> ] 10.23MB/115.2MB 23:12:21 397a918c7da3 Pull complete 23:12:21 
4d8b5d34b1ef Extracting [========================================> ] 3.473MB/4.333MB 23:12:21 00b33c871d26 Downloading [=====================> ] 110MB/253.3MB 23:12:21 00b33c871d26 Downloading [=====================> ] 110MB/253.3MB 23:12:21 4d8b5d34b1ef Extracting [==================================================>] 4.333MB/4.333MB 23:12:21 7a1cb9ad7f75 Downloading [=================> ] 41.36MB/115.2MB 23:12:21 acaa2331ed73 Extracting [===========> ] 19.5MB/84.46MB 23:12:21 f6d077cd6629 Extracting [===========> ] 11.53MB/50.34MB 23:12:21 1a8530682f8a Pull complete 23:12:21 b358088867e5 Extracting [==================================> ] 124.8MB/180.3MB 23:12:21 bf1933dc24dc Extracting [==================================================>] 1.039kB/1.039kB 23:12:21 bf1933dc24dc Extracting [==================================================>] 1.039kB/1.039kB 23:12:21 22ebf0e44c85 Extracting [===========================> ] 20.05MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [===========================> ] 20.05MB/37.02MB 23:12:21 fb4c47760659 Extracting [===========> ] 21.73MB/98.32MB 23:12:21 d93f69e96600 Downloading [=========> ] 22MB/115.2MB 23:12:21 00b33c871d26 Downloading [=======================> ] 120.2MB/253.3MB 23:12:21 00b33c871d26 Downloading [=======================> ] 120.2MB/253.3MB 23:12:21 7a1cb9ad7f75 Downloading [======================> ] 51.58MB/115.2MB 23:12:21 4d8b5d34b1ef Pull complete 23:12:21 ea2f71d64768 Extracting [===================================> ] 32.77kB/46.31kB 23:12:21 acaa2331ed73 Extracting [==============> ] 24.51MB/84.46MB 23:12:21 b358088867e5 Extracting [==================================> ] 125.9MB/180.3MB 23:12:21 d93f69e96600 Downloading [===============> ] 34.87MB/115.2MB 23:12:21 f6d077cd6629 Extracting [=============> ] 13.63MB/50.34MB 23:12:21 22ebf0e44c85 Extracting [===============================> ] 23.2MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [===============================> ] 23.2MB/37.02MB 23:12:21 ea2f71d64768 
Extracting [==================================================>] 46.31kB/46.31kB 23:12:21 806be17e856d Extracting [> ] 557.1kB/89.72MB 23:12:21 00b33c871d26 Downloading [=========================> ] 130.9MB/253.3MB 23:12:21 00b33c871d26 Downloading [=========================> ] 130.9MB/253.3MB 23:12:21 fb4c47760659 Extracting [=============> ] 26.18MB/98.32MB 23:12:21 bf1933dc24dc Pull complete 23:12:21 7a1cb9ad7f75 Downloading [==========================> ] 61.81MB/115.2MB 23:12:21 7868f013c211 Extracting [==================================================>] 13.9kB/13.9kB 23:12:21 7868f013c211 Extracting [==================================================>] 13.9kB/13.9kB 23:12:21 acaa2331ed73 Extracting [=================> ] 28.97MB/84.46MB 23:12:21 22ebf0e44c85 Extracting [=================================> ] 25.17MB/37.02MB 23:12:21 22ebf0e44c85 Extracting [=================================> ] 25.17MB/37.02MB 23:12:21 d93f69e96600 Downloading [=================> ] 41.29MB/115.2MB 23:12:21 b358088867e5 Extracting [===================================> ] 127.6MB/180.3MB 23:12:21 f6d077cd6629 Extracting [===============> ] 15.2MB/50.34MB 23:12:21 00b33c871d26 Downloading [===========================> ] 137.3MB/253.3MB 23:12:21 00b33c871d26 Downloading [===========================> ] 137.3MB/253.3MB 23:12:21 fb4c47760659 Extracting [===============> ] 30.64MB/98.32MB 23:12:21 806be17e856d Extracting [=> ] 2.785MB/89.72MB 23:12:21 ea2f71d64768 Pull complete 23:12:21 7a1cb9ad7f75 Downloading [================================> ] 74.72MB/115.2MB 23:12:21 2d8d8a45d8d1 Extracting [==================================================>] 22.97kB/22.97kB 23:12:21 acaa2331ed73 Extracting [====================> ] 34.54MB/84.46MB 23:12:21 2d8d8a45d8d1 Extracting [==================================================>] 22.97kB/22.97kB 23:12:22 22ebf0e44c85 Extracting [======================================> ] 28.31MB/37.02MB 23:12:22 22ebf0e44c85 Extracting 
[======================================> ] 28.31MB/37.02MB 23:12:22 d93f69e96600 Downloading [=======================> ] 53.63MB/115.2MB 23:12:22 00b33c871d26 Downloading [=============================> ] 149.6MB/253.3MB 23:12:22 00b33c871d26 Downloading [=============================> ] 149.6MB/253.3MB 23:12:22 b358088867e5 Extracting [====================================> ] 129.8MB/180.3MB 23:12:22 fb4c47760659 Extracting [==================> ] 36.21MB/98.32MB 23:12:22 f6d077cd6629 Extracting [=================> ] 17.3MB/50.34MB 23:12:22 806be17e856d Extracting [==> ] 4.456MB/89.72MB 23:12:22 7868f013c211 Pull complete 23:12:22 5a89213878cf Extracting [==================================================>] 13.78kB/13.78kB 23:12:22 7a1cb9ad7f75 Downloading [======================================> ] 89.29MB/115.2MB 23:12:22 5a89213878cf Extracting [==================================================>] 13.78kB/13.78kB 23:12:22 acaa2331ed73 Extracting [=======================> ] 38.99MB/84.46MB 23:12:22 22ebf0e44c85 Extracting [=========================================> ] 30.67MB/37.02MB 23:12:22 22ebf0e44c85 Extracting [=========================================> ] 30.67MB/37.02MB 23:12:22 d93f69e96600 Downloading [===========================> ] 63.27MB/115.2MB 23:12:22 00b33c871d26 Downloading [===============================> ] 160.9MB/253.3MB 23:12:22 00b33c871d26 Downloading [===============================> ] 160.9MB/253.3MB 23:12:22 b358088867e5 Extracting [====================================> ] 131.5MB/180.3MB 23:12:22 fb4c47760659 Extracting [======================> ] 43.45MB/98.32MB 23:12:22 2d8d8a45d8d1 Pull complete 23:12:22 f6d077cd6629 Extracting [==================> ] 18.35MB/50.34MB 23:12:22 806be17e856d Extracting [===> ] 6.685MB/89.72MB 23:12:22 7a1cb9ad7f75 Downloading [===========================================> ] 101.1MB/115.2MB 23:12:22 acaa2331ed73 Extracting [==========================> ] 44.56MB/84.46MB 23:12:22 d93f69e96600 Downloading 
23:12:22 [Docker layer download/extraction progress output elided — all layers downloaded, checksums verified, and pull of each layer reported "Pull complete"]
23:12:22 policy-db-migrator Pulled
23:12:23 api Pulled
23:12:23 pap Pulled
23:12:38 mariadb Pulled
23:12:40 7645ed8cef64 Extracting
[===========================================> ] 138.1MB/159.1MB 23:12:40 7645ed8cef64 Extracting [================================================> ] 155.4MB/159.1MB 23:12:40 7645ed8cef64 Extracting [==================================================>] 159.1MB/159.1MB 23:12:43 6a634b17ba79 Pull complete 23:12:43 5d8ca4014ed0 Pull complete 23:12:43 f1fb904ca1b9 Pull complete 23:12:43 6b11e56702ad Pull complete 23:12:43 6b11e56702ad Pull complete 23:12:44 7645ed8cef64 Pull complete 23:12:45 1e12dd793eba Extracting [==================================================>] 721B/721B 23:12:45 1e12dd793eba Extracting [==================================================>] 721B/721B 23:12:49 c796e22f7138 Extracting [==================================================>] 301B/301B 23:12:49 c796e22f7138 Extracting [==================================================>] 301B/301B 23:12:55 2a0008f5c37f Extracting [==================================================>] 1.228kB/1.228kB 23:12:55 2a0008f5c37f Extracting [==================================================>] 1.228kB/1.228kB 23:12:55 1e12dd793eba Pull complete 23:12:55 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 23:12:55 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 23:12:55 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 23:12:55 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 23:12:55 c796e22f7138 Pull complete 23:12:55 8e028879fd2e Extracting [==================================================>] 1.149kB/1.149kB 23:12:55 8e028879fd2e Extracting [==================================================>] 1.149kB/1.149kB 23:12:58 2a0008f5c37f Pull complete 23:12:59 63558e6a3d29 Extracting [> ] 557.1kB/246.3MB 23:12:59 63558e6a3d29 Extracting [> ] 4.456MB/246.3MB 23:12:59 63558e6a3d29 Extracting [====> ] 22.28MB/246.3MB 23:12:59 63558e6a3d29 
Extracting [======> ] 33.42MB/246.3MB 23:12:59 63558e6a3d29 Extracting [=========> ] 44.56MB/246.3MB 23:12:59 63558e6a3d29 Extracting [===========> ] 57.93MB/246.3MB 23:12:59 63558e6a3d29 Extracting [==============> ] 71.86MB/246.3MB 23:12:59 63558e6a3d29 Extracting [=================> ] 85.23MB/246.3MB 23:12:59 53d69aa7d3fc Pull complete 23:12:59 53d69aa7d3fc Pull complete 23:12:59 63558e6a3d29 Extracting [===================> ] 96.37MB/246.3MB 23:13:00 63558e6a3d29 Extracting [=====================> ] 108.1MB/246.3MB 23:13:00 63558e6a3d29 Extracting [========================> ] 120.3MB/246.3MB 23:13:00 63558e6a3d29 Extracting [===========================> ] 134.3MB/246.3MB 23:13:00 8e028879fd2e Pull complete 23:13:00 63558e6a3d29 Extracting [===========================> ] 137MB/246.3MB 23:13:00 63558e6a3d29 Extracting [============================> ] 139.3MB/246.3MB 23:13:00 63558e6a3d29 Extracting [==============================> ] 149.3MB/246.3MB 23:13:01 63558e6a3d29 Extracting [=================================> ] 166.6MB/246.3MB 23:13:01 fd153c39a15f Extracting [==================================================>] 1.123kB/1.123kB 23:13:01 fd153c39a15f Extracting [==================================================>] 1.123kB/1.123kB 23:13:01 63558e6a3d29 Extracting [==================================> ] 172.1MB/246.3MB 23:13:01 63558e6a3d29 Extracting [====================================> ] 181.6MB/246.3MB 23:13:01 63558e6a3d29 Extracting [=======================================> ] 192.7MB/246.3MB 23:13:01 63558e6a3d29 Extracting [=========================================> ] 206.7MB/246.3MB 23:13:01 63558e6a3d29 Extracting [============================================> ] 217.3MB/246.3MB 23:13:01 63558e6a3d29 Extracting [===============================================> ] 232.8MB/246.3MB 23:13:01 63558e6a3d29 Extracting [================================================> ] 239.5MB/246.3MB 23:13:01 63558e6a3d29 Extracting 
[==================================================>] 246.3MB/246.3MB 23:13:01 prometheus Pulled 23:13:02 a3ab11953ef9 Extracting [> ] 426kB/39.52MB 23:13:02 a3ab11953ef9 Extracting [> ] 426kB/39.52MB 23:13:02 a3ab11953ef9 Extracting [===========> ] 9.372MB/39.52MB 23:13:02 a3ab11953ef9 Extracting [===========> ] 9.372MB/39.52MB 23:13:02 a3ab11953ef9 Extracting [==============================> ] 24.28MB/39.52MB 23:13:02 a3ab11953ef9 Extracting [==============================> ] 24.28MB/39.52MB 23:13:03 a3ab11953ef9 Extracting [===========================================> ] 34.5MB/39.52MB 23:13:03 a3ab11953ef9 Extracting [===========================================> ] 34.5MB/39.52MB 23:13:03 a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB 23:13:03 a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB 23:13:03 grafana Pulled 23:13:03 fd153c39a15f Pull complete 23:13:03 63558e6a3d29 Pull complete 23:13:04 a3ab11953ef9 Pull complete 23:13:04 a3ab11953ef9 Pull complete 23:13:04 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 23:13:04 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 23:13:04 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 23:13:04 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 23:13:05 simulator Pulled 23:13:06 apex-pdp Pulled 23:13:06 91ef9543149d Pull complete 23:13:06 91ef9543149d Pull complete 23:13:07 2ec4f59af178 Extracting [==================================================>] 881B/881B 23:13:07 2ec4f59af178 Extracting [==================================================>] 881B/881B 23:13:07 2ec4f59af178 Extracting [==================================================>] 881B/881B 23:13:07 2ec4f59af178 Extracting [==================================================>] 881B/881B 23:13:09 
2ec4f59af178 Pull complete 23:13:09 2ec4f59af178 Pull complete 23:13:10 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 23:13:10 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 23:13:10 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 23:13:10 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 23:13:11 8b7e81cd5ef1 Pull complete 23:13:11 8b7e81cd5ef1 Pull complete 23:13:11 c52916c1316e Extracting [==================================================>] 171B/171B 23:13:11 c52916c1316e Extracting [==================================================>] 171B/171B 23:13:11 c52916c1316e Extracting [==================================================>] 171B/171B 23:13:11 c52916c1316e Extracting [==================================================>] 171B/171B 23:13:11 c52916c1316e Pull complete 23:13:11 c52916c1316e Pull complete 23:13:12 d93f69e96600 Extracting [> ] 557.1kB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [> ] 557.1kB/115.2MB 23:13:12 d93f69e96600 Extracting [=======> ] 16.15MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [====> ] 10.03MB/115.2MB 23:13:12 d93f69e96600 Extracting [============> ] 28.97MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [========> ] 19.5MB/115.2MB 23:13:12 d93f69e96600 Extracting [==================> ] 42.89MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [=============> ] 31.2MB/115.2MB 23:13:12 d93f69e96600 Extracting [========================> ] 56.82MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [====================> ] 46.79MB/115.2MB 23:13:12 d93f69e96600 Extracting [=================================> ] 76.87MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [============================> ] 65.73MB/115.2MB 23:13:12 d93f69e96600 Extracting [=========================================> ] 96.37MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [=====================================> ] 86.9MB/115.2MB 23:13:12 
d93f69e96600 Extracting [================================================> ] 110.9MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [=============================================> ] 104.7MB/115.2MB 23:13:12 d93f69e96600 Extracting [==================================================>] 115.2MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [=================================================> ] 113.1MB/115.2MB 23:13:12 7a1cb9ad7f75 Extracting [==================================================>] 115.2MB/115.2MB 23:13:12 d93f69e96600 Pull complete 23:13:12 bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 23:13:12 bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 23:13:13 7a1cb9ad7f75 Pull complete 23:13:13 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 23:13:13 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 23:13:13 bbb9d15c45a1 Pull complete 23:13:13 kafka Pulled 23:13:13 0a92c7dea7af Pull complete 23:13:13 zookeeper Pulled 23:13:13 Network compose_default Creating 23:13:13 Network compose_default Created 23:13:13 Container zookeeper Creating 23:13:13 Container prometheus Creating 23:13:13 Container simulator Creating 23:13:13 Container mariadb Creating 23:13:20 Container prometheus Created 23:13:20 Container grafana Creating 23:13:20 Container mariadb Created 23:13:20 Container simulator Created 23:13:20 Container policy-db-migrator Creating 23:13:20 Container zookeeper Created 23:13:20 Container kafka Creating 23:13:21 Container policy-db-migrator Created 23:13:21 Container policy-api Creating 23:13:21 Container kafka Created 23:13:21 Container grafana Created 23:13:21 Container policy-api Created 23:13:21 Container policy-pap Creating 23:13:21 Container policy-pap Created 23:13:21 Container policy-apex-pdp Creating 23:13:21 Container policy-apex-pdp Created 23:13:21 Container simulator Starting 
23:13:21 Container mariadb Starting
23:13:21 Container prometheus Starting
23:13:21 Container zookeeper Starting
23:13:22 Container zookeeper Started
23:13:22 Container kafka Starting
23:13:23 Container kafka Started
23:13:23 Container prometheus Started
23:13:23 Container grafana Starting
23:13:24 Container grafana Started
23:13:24 Container mariadb Started
23:13:24 Container policy-db-migrator Starting
23:13:25 Container policy-db-migrator Started
23:13:25 Container policy-api Starting
23:13:26 Container policy-api Started
23:13:26 Container policy-pap Starting
23:13:26 Container simulator Started
23:13:27 Container policy-pap Started
23:13:27 Container policy-apex-pdp Starting
23:13:28 Container policy-apex-pdp Started
23:13:28 Prometheus server: http://localhost:30259
23:13:28 Grafana server: http://localhost:30269
23:13:38 Waiting for REST to come up on localhost port 30003...
23:13:38 NAMES             STATUS
23:13:38 policy-apex-pdp   Up 10 seconds
23:13:38 policy-pap        Up 10 seconds
23:13:38 policy-api        Up 12 seconds
23:13:38 kafka             Up 15 seconds
23:13:38 grafana           Up 14 seconds
23:13:38 simulator         Up 12 seconds
23:13:38 mariadb           Up 14 seconds
23:13:38 zookeeper         Up 16 seconds
23:13:38 prometheus        Up 15 seconds
23:14:04 NAMES             STATUS
23:14:04 policy-apex-pdp   Up 35 seconds
23:14:04 policy-pap        Up 36 seconds
23:14:04 policy-api        Up 37 seconds
23:14:04 kafka             Up 40 seconds
23:14:04 grafana           Up 39 seconds
23:14:04 simulator         Up 37 seconds
23:14:04 mariadb           Up 39 seconds
23:14:04 zookeeper         Up 41 seconds
23:14:04 prometheus        Up 40 seconds
23:14:04 Build docker image for robot framework
23:14:04 Error: No such image: policy-csit-robot
23:14:04 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/models'...
23:14:05 Build robot framework docker image
23:14:05 Sending build context to Docker daemon  16.16MB
23:14:05 Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
23:14:05 3.10-slim-bullseye: Pulling from library/python
23:14:05 728328ac3bde: Pulling fs layer
23:14:05 1b1ca9b4dc3e: Pulling fs layer
23:14:05 87fd8cb1268a: Pulling fs layer
23:14:05 bc8f89fb7e32: Pulling fs layer
23:14:05 91dc9fb1162f: Pulling fs layer
23:14:05 91dc9fb1162f: Waiting
23:14:05 1b1ca9b4dc3e: Verifying Checksum
23:14:05 1b1ca9b4dc3e: Download complete
23:14:05 bc8f89fb7e32: Verifying Checksum
23:14:05 bc8f89fb7e32: Download complete
23:14:05 87fd8cb1268a: Verifying Checksum
23:14:05 87fd8cb1268a: Download complete
23:14:05 91dc9fb1162f: Verifying Checksum
23:14:05 91dc9fb1162f: Download complete
23:14:05 728328ac3bde: Verifying Checksum
23:14:05 728328ac3bde: Download complete
23:14:07 728328ac3bde: Pull complete
23:14:07 1b1ca9b4dc3e: Pull complete
23:14:07 87fd8cb1268a: Pull complete
23:14:07 bc8f89fb7e32: Pull complete
23:14:08 91dc9fb1162f: Pull complete
23:14:08 Digest: sha256:9745f361fffc367922210f2de48a58f44782ebf5a7375195e91ebd5b3ce5a8ff
23:14:08 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
23:14:08  ---> 585a36762ff2
23:14:08 Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
23:14:10  ---> Running in 0e8274f8c5a8
23:14:10 Removing intermediate container 0e8274f8c5a8
23:14:10  ---> 7586f4229a46
23:14:10 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
23:14:10  ---> Running in 2a7146abf949
23:14:10 Removing intermediate container 2a7146abf949
23:14:10  ---> 07cf661a2eac
23:14:10 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST
23:14:10  ---> Running in 245efd9e67c7
23:14:10 Removing intermediate container 245efd9e67c7
23:14:10  ---> 11046183580e
23:14:10 Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
23:14:10  ---> Running in 5412a20554b6
23:14:23 bcrypt==4.1.3
23:14:23 certifi==2024.6.2
23:14:23 cffi==1.17.0rc1
23:14:23 charset-normalizer==3.3.2
23:14:23 confluent-kafka==2.4.0
23:14:23 cryptography==42.0.8
23:14:23 decorator==5.1.1
23:14:23 deepdiff==7.0.1
23:14:23 dnspython==2.6.1
23:14:23 future==1.0.0
23:14:23 idna==3.7
23:14:23 Jinja2==3.1.4
23:14:23 jsonpath-rw==1.4.0
23:14:23 kafka-python==2.0.2
23:14:23 MarkupSafe==2.1.5
23:14:23 more-itertools==5.0.0
23:14:23 ordered-set==4.1.0
23:14:23 paramiko==3.4.0
23:14:23 pbr==6.0.0
23:14:23 ply==3.11
23:14:23 protobuf==5.27.1
23:14:23 pycparser==2.22
23:14:23 PyNaCl==1.5.0
23:14:23 PyYAML==6.0.1
23:14:23 requests==2.32.3
23:14:23 robotframework==7.0.1rc1
23:14:23 robotframework-onap==0.6.0.dev105
23:14:23 robotframework-requests==1.0a11
23:14:23 robotlibcore-temp==1.0.2
23:14:23 six==1.16.0
23:14:23 urllib3==2.2.1
23:14:27 Removing intermediate container 5412a20554b6
23:14:27  ---> 5ab2510154d4
23:14:27 Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
23:14:27  ---> Running in c231faee9f22
23:14:27 Removing intermediate container c231faee9f22
23:14:27  ---> 8e7ffeedc748
23:14:27 Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
23:14:29  ---> 10aefb99543a
23:14:29 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
23:14:29  ---> Running in 87eb7e103182
23:14:29 Removing intermediate container 87eb7e103182
23:14:29  ---> 29b7e102e34c
23:14:29 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
23:14:29  ---> Running in ed773ab0fe19
23:14:29 Removing intermediate container ed773ab0fe19
23:14:29  ---> d6292ff4693d
23:14:29 Successfully built d6292ff4693d
23:14:29 Successfully tagged policy-csit-robot:latest
23:14:32 top - 23:14:32 up 4 min,  0 users,  load average: 3.75, 2.14, 0.86
23:14:32 Tasks: 208 total,   1 running, 130 sleeping,   0 stopped,   0 zombie
23:14:32 %Cpu(s): 13.8 us,  3.1 sy,  0.0 ni, 75.2 id,  7.7 wa,  0.0 hi,  0.1 si,  0.1 st
23:14:32
23:14:32          total        used        free      shared  buff/cache   available
23:14:32 Mem:       31G        2.7G         22G        1.3M        6.5G         28G
23:14:32 Swap:     1.0G          0B        1.0G
23:14:32
23:14:32 NAMES             STATUS
23:14:32 policy-apex-pdp   Up About a minute
23:14:32 policy-pap        Up About a minute
23:14:32 policy-api        Up About a minute
23:14:32 kafka             Up About a minute
23:14:32 grafana           Up About a minute
23:14:32 simulator         Up About a minute
23:14:32 mariadb           Up About a minute
23:14:32 zookeeper         Up About a minute
23:14:32 prometheus        Up About a minute
23:14:32
23:14:35 CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
23:14:35 5636e6f5b72d   policy-apex-pdp   2.98%   173.7MiB / 31.41GiB   0.54%   25.8kB / 28.7kB   0B / 0B         49
23:14:35 30bb6abee7cf   policy-pap        1.28%   505.9MiB / 31.41GiB   1.57%   110kB / 104kB     0B / 149MB      63
23:14:35 e8a59f65bc16   policy-api        0.09%   506.9MiB / 31.41GiB   1.58%   988kB / 647kB     0B / 8.19kB     53
23:14:35 2f38651c4372   kafka             2.80%   379.7MiB / 31.41GiB   1.18%   126kB / 126kB     0B / 541kB      85
23:14:35 1d5f55281297   grafana           0.05%   64.36MiB / 31.41GiB   0.20%   24.8kB / 4.82kB   0B / 25.7MB     18
23:14:35 e574b93b799e   simulator         0.09%   120MiB / 31.41GiB     0.37%   1.34kB / 0B       225kB / 0B      77
23:14:35 621759e17d2b   mariadb           0.02%   102.3MiB / 31.41GiB   0.32%   971kB / 1.22MB    11MB / 71.4MB   31
23:14:35 fffc8e05592c   zookeeper         0.07%   97.01MiB / 31.41GiB   0.30%   54.5kB / 47.4kB   4.1kB / 381kB   61
23:14:35 6644e5112741   prometheus        0.00%   18.88MiB / 31.41GiB   0.06%   1.8kB / 474B      0B / 0B         12
23:14:35
23:14:35 Container policy-csit Creating
23:14:35 Container policy-csit Created
23:14:35 Attaching to policy-csit
23:14:36 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
23:14:36 policy-csit | Run Robot test
23:14:36 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
23:14:36 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
23:14:36 policy-csit | -v POLICY_API_IP:policy-api:6969
23:14:36 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
23:14:36 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
23:14:36 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
23:14:36 policy-csit | -v APEX_IP:policy-apex-pdp:6969
23:14:36 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
23:14:36 policy-csit | -v KAFKA_IP:kafka:9092
23:14:36 policy-csit | -v PROMETHEUS_IP:prometheus:9090
23:14:36 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
23:14:36 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
23:14:36 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
23:14:36 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
23:14:36 policy-csit | -v TEMP_FOLDER:/tmp/distribution
23:14:36 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
23:14:36 policy-csit | -v CLAMP_K8S_TEST:
23:14:36 policy-csit | Starting Robot test suites ...
23:14:36 policy-csit | ==============================================================================
23:14:36 policy-csit | Pap-Test & Pap-Slas
23:14:36 policy-csit | ==============================================================================
23:14:36 policy-csit | Pap-Test & Pap-Slas.Pap-Test
23:14:36 policy-csit | ==============================================================================
23:14:37 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:37 policy-csit | ------------------------------------------------------------------------------
23:14:38 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:14:38 policy-csit | ------------------------------------------------------------------------------
23:14:38 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:14:38 policy-csit | ------------------------------------------------------------------------------
23:14:39 policy-csit | Healthcheck :: Verify policy pap health check | PASS |
23:14:39 policy-csit | ------------------------------------------------------------------------------
23:14:59 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:14:59 policy-csit | ------------------------------------------------------------------------------
23:14:59 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:14:59 policy-csit | ------------------------------------------------------------------------------
23:15:00 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:00 policy-csit | ------------------------------------------------------------------------------
23:15:00 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:00 policy-csit | ------------------------------------------------------------------------------
23:15:00 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:00 policy-csit | ------------------------------------------------------------------------------
23:15:00 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:00 policy-csit | ------------------------------------------------------------------------------
23:15:01 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:01 policy-csit | ------------------------------------------------------------------------------
23:15:01 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:01 policy-csit | ------------------------------------------------------------------------------
23:15:01 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:01 policy-csit | ------------------------------------------------------------------------------
23:15:01 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:01 policy-csit | ------------------------------------------------------------------------------
23:15:01 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:01 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:02 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:02 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:02 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:02 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:02 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:02 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:02 policy-csit | ------------------------------------------------------------------------------
23:15:02 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
23:15:02 policy-csit | 22 tests, 22 passed, 0 failed
23:15:02 policy-csit | ==============================================================================
23:15:02 policy-csit | Pap-Test & Pap-Slas.Pap-Slas
23:15:02 policy-csit | ==============================================================================
23:16:02 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:02 policy-csit | ------------------------------------------------------------------------------
23:16:02 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
23:16:02 policy-csit | 8 tests, 8 passed, 0 failed
23:16:02 policy-csit | ==============================================================================
23:16:02 policy-csit | Pap-Test & Pap-Slas | PASS |
23:16:02 policy-csit | 30 tests, 30 passed, 0 failed
23:16:02 policy-csit | ==============================================================================
23:16:02 policy-csit | Output:  /tmp/results/output.xml
23:16:02 policy-csit | Log:     /tmp/results/log.html
23:16:02 policy-csit | Report:  /tmp/results/report.html
23:16:02 policy-csit | RESULT: 0
23:16:03 policy-csit exited with code 0
23:16:03 NAMES             STATUS
23:16:03 policy-apex-pdp   Up 2 minutes
23:16:03 policy-pap        Up 2 minutes
23:16:03 policy-api        Up 2 minutes
23:16:03 kafka             Up 2 minutes
23:16:03 grafana           Up 2 minutes
23:16:03 simulator         Up 2 minutes
23:16:03 mariadb           Up 2 minutes
23:16:03 zookeeper         Up 2 minutes
23:16:03 prometheus        Up 2 minutes
23:16:03 Shut down started!
23:16:04 Collecting logs from docker compose containers...
23:16:08 ======== Logs from grafana ========
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.438912311Z level=info msg="Starting Grafana" version=11.0.0 commit=83b9528bce85cf9371320f6d6e450916156da3f6 branch=v11.0.x compiled=2024-06-06T23:13:24Z
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.43987863Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.439976361Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440088492Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440163042Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440245073Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440323774Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440385634Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440472345Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440544046Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440669817Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440803958Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.440923769Z level=info msg=Target target=[all]
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.441049611Z level=info msg="Path Home" path=/usr/share/grafana
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.441123001Z level=info msg="Path Data" path=/var/lib/grafana
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.441188542Z level=info msg="Path Logs" path=/var/log/grafana
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.441284653Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.441343073Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
23:16:08 grafana | logger=settings t=2024-06-06T23:13:24.441382354Z level=info msg="App mode production"
23:16:08 grafana | logger=sqlstore t=2024-06-06T23:13:24.441952579Z level=info msg="Connecting to DB" dbtype=sqlite3
23:16:08 grafana | logger=sqlstore t=2024-06-06T23:13:24.44207993Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.445073027Z level=info msg="Locking database"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.445142458Z level=info msg="Starting DB migrations"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.445759694Z level=info msg="Executing migration" id="create migration_log table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.446686312Z level=info msg="Migration successfully executed" id="create migration_log table" duration=926.718µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.450385176Z level=info msg="Executing migration" id="create user table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.451025362Z level=info msg="Migration successfully executed" id="create user table" duration=640.336µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.456545513Z level=info msg="Executing migration" id="add unique index user.login"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.457449631Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=904.058µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.461242336Z level=info msg="Executing migration" id="add unique index user.email"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.462675989Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.434233ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.466522564Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.467761576Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.239012ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.473405498Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.474192875Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=787.377µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.477624766Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.480042788Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.418022ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.48343315Z level=info msg="Executing migration" id="create user table v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.484343878Z level=info msg="Migration successfully executed" id="create user table v2" duration=910.728µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.487405556Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.488265944Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=860.188µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.4943213Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.495738993Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.417693ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.528679975Z level=info msg="Executing migration" id="copy data_source v1 to v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.529058178Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=375.143µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.534542179Z level=info msg="Executing migration" id="Drop old table user_v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.535100504Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=557.455µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.544413189Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.545498899Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.08571ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.549234234Z level=info msg="Executing migration" id="Update user table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.549259434Z level=info msg="Migration successfully executed" id="Update user table charset" duration=26.32µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.556654852Z level=info msg="Executing migration" id="Add last_seen_at column to user"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.557962264Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.307102ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.562503125Z level=info msg="Executing migration" id="Add missing user data" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.562942899Z level=info msg="Migration successfully executed" id="Add missing user data" duration=418.074µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.568776083Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.569777022Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=999.139µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.572655989Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.573468116Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=811.907µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.576730346Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.577867476Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.14158ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.583474448Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.592143157Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.665609ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.599242273Z level=info msg="Executing migration" id="Add uid column to user" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.600115071Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=873.348µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.602912916Z level=info msg="Executing migration" 
id="Update uid column values for users" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.603107218Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=194.792µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.605734952Z level=info msg="Executing migration" id="Add unique index user_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.606421779Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=686.716µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.611091441Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.611387974Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=296.833µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.61420026Z level=info msg="Executing migration" id="update login and email fields to lowercase" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.614549783Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=349.723µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.617763003Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.619052184Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.289811ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.669064784Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.670512757Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.453144ms 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:24.674920737Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.676172759Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.254392ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.685891328Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.686837787Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=953.359µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.690137997Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.691219437Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.08146ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.695239994Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.695378275Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=138.971µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.698415523Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.699353702Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=940.579µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.702249258Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.702959915Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=710.417µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.705978422Z 
level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.70680742Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=829.758µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.709634936Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.710407043Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=773.517µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.713640873Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.716818912Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.179409ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.719771669Z level=info msg="Executing migration" id="create temp_user v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.720836439Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.0644ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.730393186Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.731280985Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=905.769µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.73407922Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.734883808Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=801.148µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.738547741Z level=info msg="Executing migration" id="create index 
IDX_temp_user_code - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.739260248Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=714.717µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.742982262Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.743736289Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=754.317µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.746201692Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.746666196Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=494.205µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.749409841Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.750044837Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=634.726µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.754320356Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.755060803Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=742.087µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.800983884Z level=info msg="Executing migration" id="create star table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.802131665Z level=info msg="Migration successfully executed" id="create star table" duration=1.146361ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.805201723Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:24.8059313Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=729.397µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.811042877Z level=info msg="Executing migration" id="create org table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.811768213Z level=info msg="Migration successfully executed" id="create org table v1" duration=725.466µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.814836162Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.815570968Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=732.426µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.821316671Z level=info msg="Executing migration" id="create org_user table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.822028298Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=730.587µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.827284566Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.827828261Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=543.725µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.8310595Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.831602515Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=542.755µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.834737034Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.835273199Z level=info msg="Migration successfully executed" id="create 
index IDX_org_user_user_id - v1" duration=536.095µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.840562888Z level=info msg="Executing migration" id="Update org table charset" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.840590638Z level=info msg="Migration successfully executed" id="Update org table charset" duration=27.34µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.843116441Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.843137091Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=20.67µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.845556944Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.845705075Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=145.411µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.847732964Z level=info msg="Executing migration" id="create dashboard table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.848299069Z level=info msg="Migration successfully executed" id="create dashboard table" duration=565.916µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.851287366Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.851843511Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=555.615µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.854505896Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.855047561Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=541.185µs 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:24.857718895Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.858187339Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=467.974µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.860895914Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.861443559Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=546.695µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.864925491Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.865428916Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=505.645µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.86810013Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.871643953Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=3.543193ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.874833692Z level=info msg="Executing migration" id="create dashboard v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.875356757Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=523.105µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.879297003Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.879825148Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=527.345µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.882939647Z 
level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.883458691Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=519.464µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.887838432Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.888078394Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=240.222µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.89205293Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.892567675Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=514.775µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.928675716Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.928726947Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=51.411µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.93231234Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.933548981Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.236721ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.938111173Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.939353434Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.243581ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.942785706Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.944048308Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.262422ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.947290897Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.947837752Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=546.415µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.952351854Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.953608885Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.255121ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.956995546Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.958257328Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.261382ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.963088722Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.964404114Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.315042ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.968128289Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.968158679Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=113.671µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.972007144Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:24.972043845Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=37.531µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.975962661Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.979834596Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.870645ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.986033053Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.988724058Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.690485ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.992059258Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.993944766Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.885108ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.997150605Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:24.999011512Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.863907ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.004798476Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.005017028Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=218.672µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.011774012Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.01262065Z level=info msg="Migration successfully executed" 
id="Add unique index dashboard_org_id_uid" duration=846.018µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.018853069Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.0200576Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.204661ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.058414141Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.058490022Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=77.401µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.066504087Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.06785955Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.354963ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.072502924Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.073609274Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.09123ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.07843912Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.083956782Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.517432ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.088621676Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.089328572Z level=info msg="Migration 
successfully executed" id="create dashboard_provisioning v2" duration=706.706µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.09436365Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.095707402Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.342502ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.099302166Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.101033243Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.729557ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.106234522Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.106556615Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=321.813µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.11035954Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.111186188Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=825.638µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.117516958Z level=info msg="Executing migration" id="Add check_sum column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.12093375Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.418932ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.127365221Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.128252929Z level=info 
msg="Migration successfully executed" id="Add index for dashboard_title" duration=887.348µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.132398128Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.1325925Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=193.932µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.137414495Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.137642297Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=227.882µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.144476562Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.14531105Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=834.278µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.149000114Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.151173925Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.173211ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.157788047Z level=info msg="Executing migration" id="create data_source table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.158684066Z level=info msg="Migration successfully executed" id="create data_source table" duration=896.349µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.222082663Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.22392714Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.846147ms 23:16:08 grafana | 
logger=migrator t=2024-06-06T23:13:25.227520854Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.228407292Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=885.298µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.232126527Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.232905895Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=779.508µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.238940431Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.240263874Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.323723ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.245026459Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.253784911Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.759462ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.260100041Z level=info msg="Executing migration" id="create data_source table v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.261222991Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.1207ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.269471699Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.271228105Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.755516ms 
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.277177031Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.278007369Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=829.598µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.281758665Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.282273709Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=514.994µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.287828402Z level=info msg="Executing migration" id="Add column with_credentials"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.290061083Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.232661ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.293915029Z level=info msg="Executing migration" id="Add secure json data column"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.29617279Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.257611ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.303313548Z level=info msg="Executing migration" id="Update data_source table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.303338608Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.01µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.31098872Z level=info msg="Executing migration" id="Update initial version to 1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.311343203Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=354.843µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.339437398Z level=info msg="Executing migration" id="Add read_only data column"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.34184094Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.404552ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.346129401Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.346337253Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=208.032µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.353400829Z level=info msg="Executing migration" id="Update json_data with nulls"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.353593841Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=193.492µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.365753716Z level=info msg="Executing migration" id="Add uid column"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.368247779Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.496023ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.371750982Z level=info msg="Executing migration" id="Update uid value"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.371982394Z level=info msg="Migration successfully executed" id="Update uid value" duration=227.992µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.377748549Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.379183382Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.433253ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.384908766Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.387122927Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=2.214491ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.393605228Z level=info msg="Executing migration" id="create api_key table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.394777249Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.173261ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.40017584Z level=info msg="Executing migration" id="add index api_key.account_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.401172509Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=996.689µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.409616869Z level=info msg="Executing migration" id="add index api_key.key"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.410484947Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=868.198µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.414657066Z level=info msg="Executing migration" id="add index api_key.account_id_name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.416256861Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.599335ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.421628082Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.422392289Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=764.597µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.43207947Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.432871598Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=792.188µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.437632373Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.43839997Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=767.487µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.472191798Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.479429676Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.240248ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.483474184Z level=info msg="Executing migration" id="create api_key table v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.484251012Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=777.148µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.48834441Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.48938787Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.04071ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.496450526Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.497256324Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=806.278µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.504686874Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.505586513Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=903.468µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.512920752Z level=info msg="Executing migration" id="copy api_key v1 to v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.51382847Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=911.478µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.52017301Z level=info msg="Executing migration" id="Drop old table api_key_v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.520898417Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=730.197µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.525830623Z level=info msg="Executing migration" id="Update api_key table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.525967304Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=141.391µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.534439254Z level=info msg="Executing migration" id="Add expires to api_key table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.537679295Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.239451ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.545451378Z level=info msg="Executing migration" id="Add service account foreign key"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.552121841Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=6.676053ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.558789014Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.558999136Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=210.611µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.564890181Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.568623616Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.736485ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.614124675Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.62320652Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=9.081955ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.643144488Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.643866145Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=717.107µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.64973628Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.650126634Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=390.344µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.652633577Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.653199713Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=566.076µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.656048119Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.656637895Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=589.416µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.663865633Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.664448169Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=582.066µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.667126074Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.667665359Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=539.045µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.670804958Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.670879419Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=72.761µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.673829877Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.673881817Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=33.31µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.682489218Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.685410376Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.920898ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.690154481Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.693863526Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.711165ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.697709862Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.697775342Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=66.33µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.704119382Z level=info msg="Executing migration" id="create quota table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.734516798Z level=info msg="Migration successfully executed" id="create quota table v1" duration=30.397516ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.75268739Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.754424146Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.736096ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.761438862Z level=info msg="Executing migration" id="Update quota table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.761465382Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=27.61µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.764214498Z level=info msg="Executing migration" id="create plugin_setting table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.765103596Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=888.508µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.770569238Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.771168844Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=599.606µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.77501431Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.778131869Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.117819ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.784042765Z level=info msg="Executing migration" id="Update plugin_setting table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.784113606Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=72.341µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.791760718Z level=info msg="Executing migration" id="create session table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.792661876Z level=info msg="Migration successfully executed" id="create session table" duration=901.018µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.796655154Z level=info msg="Executing migration" id="Drop old table playlist table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.796742294Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=85.86µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.801120596Z level=info msg="Executing migration" id="Drop old table playlist_item table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.801203906Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=83.9µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.807455735Z level=info msg="Executing migration" id="create playlist table v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.808187242Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=731.427µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.812044479Z level=info msg="Executing migration" id="create playlist item table v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.812987627Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=945.968µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.818090455Z level=info msg="Executing migration" id="Update playlist table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.818111566Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=21.991µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.82170292Z level=info msg="Executing migration" id="Update playlist_item table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.82172463Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=22.441µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.829801796Z level=info msg="Executing migration" id="Add playlist column created_at"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.832171818Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.369902ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.836887063Z level=info msg="Executing migration" id="Add playlist column updated_at"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.839182614Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.294112ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.896609975Z level=info msg="Executing migration" id="drop preferences table v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.896988139Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=375.713µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.903781432Z level=info msg="Executing migration" id="drop preferences table v3"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.903963144Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=182.282µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.909419536Z level=info msg="Executing migration" id="create preferences table v3"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.910657507Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.239131ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.914603744Z level=info msg="Executing migration" id="Update preferences table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.914628685Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=25.141µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.91946192Z level=info msg="Executing migration" id="Add column team_id in preferences"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.92369112Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.22725ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.931874657Z level=info msg="Executing migration" id="Update team_id column values in preferences"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.932092129Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=220.102µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.936751153Z level=info msg="Executing migration" id="Add column week_start in preferences"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.940154145Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.404132ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.944155373Z level=info msg="Executing migration" id="Add column preferences.json_data"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.946980919Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.824936ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.9523693Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.952464051Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=95.781µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.956285857Z level=info msg="Executing migration" id="Add preferences index org_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.957127165Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=840.288µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.961763419Z level=info msg="Executing migration" id="Add preferences index user_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.962501945Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=737.986µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.968090918Z level=info msg="Executing migration" id="create alert table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.969056487Z level=info msg="Migration successfully executed" id="create alert table v1" duration=964.059µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.975649379Z level=info msg="Executing migration" id="add index alert org_id & id "
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.976454377Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=805.778µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.985419851Z level=info msg="Executing migration" id="add index alert state"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:25.986142818Z level=info msg="Migration successfully executed" id="add index alert state" duration=723.367µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.043328237Z level=info msg="Executing migration" id="add index alert dashboard_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.044897982Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.573035ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.049129541Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.049669535Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=540.344µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.055438798Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.056092694Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=653.886µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.061527134Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.06216485Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=637.686µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.065647622Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.073148581Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.499879ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.078162657Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.078754482Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=591.825µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.082233144Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.083146133Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=912.879µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.088405561Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.089180248Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=773.667µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.096342464Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.097259812Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=917.368µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.104157556Z level=info msg="Executing migration" id="create alert_notification table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.105885181Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.727345ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.109515945Z level=info msg="Executing migration" id="Add column is_default"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.113575732Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.059737ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.12094817Z level=info msg="Executing migration" id="Add column frequency"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.124558233Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.609923ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.129061364Z level=info msg="Executing migration" id="Add column send_reminder"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.133537395Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.475441ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.194226562Z level=info msg="Executing migration" id="Add column disable_resolve_message"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.198249159Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.018487ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.201999334Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.202925442Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=926.358µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.215904751Z level=info msg="Executing migration" id="Update alert table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.215928222Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=24.141µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.221336521Z level=info msg="Executing migration" id="Update alert_notification table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.221357881Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=23.81µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.226068245Z level=info msg="Executing migration" id="create notification_journal table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.22666164Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=593.515µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.231790247Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.232730336Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=938.899µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.242227873Z level=info msg="Executing migration" id="drop alert_notification_journal"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.243300153Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.07241ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.247854625Z level=info msg="Executing migration" id="create alert_notification_state table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.24844453Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=589.915µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.253271314Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.254159253Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=887.949µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.260577891Z level=info msg="Executing migration" id="Add for to alert table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.264357976Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.779965ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.268459284Z level=info msg="Executing migration" id="Add column uid in alert_notification"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.272187678Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.728164ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.280513344Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.280800167Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=287.293µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.344512672Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.346086416Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.573744ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.351339385Z level=info msg="Executing migration" id="Remove unique index org_id_name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.352562946Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.223151ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.359075786Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.36281941Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.743474ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.368948086Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.369109318Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=160.942µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.374348546Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.375408395Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.058919ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.381175788Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.382725133Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.549355ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.391567424Z level=info msg="Executing migration" id="Drop old annotation table v4"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.391797176Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=229.872µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.39767347Z level=info msg="Executing migration" id="create annotation table v5"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.39873246Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.05903ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.403565364Z level=info msg="Executing migration" id="add index annotation 0 v3"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.40427875Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=713.476µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.410665349Z level=info msg="Executing migration" id="add index annotation 1 v3"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.411586158Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=941.259µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.415975848Z level=info msg="Executing migration" id="add index annotation 2 v3"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.417408421Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.317162ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.423315655Z level=info msg="Executing migration" id="add index annotation 3 v3"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.424252744Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=936.899µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.432112146Z level=info msg="Executing migration" id="add index annotation 4 v3"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.433298037Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.185101ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.438821478Z level=info msg="Executing migration" id="Update annotation table charset"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.438847948Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.481µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.47609841Z level=info msg="Executing migration" id="Add column region_id to annotation table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.484133483Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=8.033003ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.49032091Z level=info msg="Executing migration" id="Drop category_id index"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.490979186Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=659.416µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.493703851Z level=info msg="Executing migration" id="Add column tags to annotation table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.496628788Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.924937ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.500839417Z level=info msg="Executing migration" id="Create annotation_tag table v2"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.501431342Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=591.955µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.506253676Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.507155585Z level=info msg="Migration successfully executed" id="Add unique index
annotation_tag.annotation_id_tag_id" duration=902.179µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.51539568Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.5164041Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.00795ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.52411758Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.53715234Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.03538ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.540586062Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.541200307Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=614.215µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.544849431Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.546149953Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.300372ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.552846544Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.553203237Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=356.503µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.557591648Z level=info msg="Executing migration" id="drop table 
annotation_tag_v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.55891753Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.327442ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.564516761Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.564843084Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=327.433µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.571611506Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.575875856Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.26391ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.618714689Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.624883495Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.173416ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.628174726Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.629138644Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=964.538µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.633267712Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.634178881Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=914.549µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.638591981Z level=info msg="Executing migration" id="Convert existing annotations from 
seconds to milliseconds" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.638847684Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=257.083µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.643839968Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.648288959Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.449201ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.652365747Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.653208414Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=842.597µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.660548242Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.660746304Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=196.992µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.665959471Z level=info msg="Executing migration" id="Move region to single row" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.666661768Z level=info msg="Migration successfully executed" id="Move region to single row" duration=702.747µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.67346057Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.674441519Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=981.989µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.679673857Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:26.680671106Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.005059ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.690594478Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.692634956Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=2.041128ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.699346818Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.70069602Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.349312ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.705769787Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.707094849Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.334592ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.71370562Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.715011162Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.305302ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.731410412Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.731618824Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=212.472µs 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:26.741581156Z level=info msg="Executing migration" id="create test_data table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.742824507Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.244762ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.754390163Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.755322972Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=936.529µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.762106854Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.763147234Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.041159ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.767325202Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.768422802Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.09518ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.774553948Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.774875441Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=324.113µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.78131806Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.781907496Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" 
duration=589.626µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.787513577Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.787632008Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=119.971µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.792514803Z level=info msg="Executing migration" id="create team table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.793495252Z level=info msg="Migration successfully executed" id="create team table" duration=980.789µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.801673227Z level=info msg="Executing migration" id="add index team.org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.803813927Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=2.14307ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.809042935Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.810849001Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.797086ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.818002537Z level=info msg="Executing migration" id="Add column uid in team" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.823671679Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.674542ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.826978019Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.827453624Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=475.395µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.832235428Z level=info msg="Executing migration" id="Add unique 
index team_org_id_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.833295317Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.059759ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.837361055Z level=info msg="Executing migration" id="create team member table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.838413084Z level=info msg="Migration successfully executed" id="create team member table" duration=1.051619ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.844154657Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.847249405Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=3.094358ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.855096668Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.856341889Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.245231ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.862656637Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.864818857Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=2.1627ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.90218928Z level=info msg="Executing migration" id="Add column email to team table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.909444166Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.254126ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.915937076Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:26.919091835Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.152499ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.922058472Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.926471823Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.412711ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.929528511Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.930390949Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=862.258µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.936061231Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.93705129Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=988.789µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.940454411Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.942039346Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.585265ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.950365612Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.951312351Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=946.409µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.954670862Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:08 grafana | 
logger=migrator t=2024-06-06T23:13:26.956145305Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.477414ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.959373855Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.960821498Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.447083ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.964102968Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.965003456Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=899.938µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.970254285Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.971857039Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.601824ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.975503953Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.97634042Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=835.787µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.986208551Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.986626945Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=420.914µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.990891234Z level=info msg="Executing migration" id="create tag table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.991854903Z 
level=info msg="Migration successfully executed" id="create tag table" duration=962.579µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.995510886Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:26.996435625Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=924.629µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.000527382Z level=info msg="Executing migration" id="create login attempt table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.001269189Z level=info msg="Migration successfully executed" id="create login attempt table" duration=741.597µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.003955624Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.004879572Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=923.438µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.007749108Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.008625906Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=874.478µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.012742662Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.027447154Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.702302ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.03032862Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.030841734Z level=info msg="Migration successfully executed" id="create login_attempt v2" 
duration=515.174µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.033551048Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.034174554Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=622.676µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.039124568Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.039413321Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=288.833µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.042792171Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.043433497Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=641.506µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.046511364Z level=info msg="Executing migration" id="create user auth table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.047288021Z level=info msg="Migration successfully executed" id="create user auth table" duration=777.107µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.05273788Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.053692299Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=954.039µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.056989108Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.057053979Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=65.901µs 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:27.060112616Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.065429084Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.316187ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.068329799Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.073514896Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.184857ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.077735184Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.08298273Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.247276ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.08630939Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.091755489Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.448859ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.094833706Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.095853896Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.02071ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.100763079Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.106888724Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.122185ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.12316913Z level=info msg="Executing migration" 
id="create server_lock table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.124822845Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.653144ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.134995215Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.135957824Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=962.209µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.148589387Z level=info msg="Executing migration" id="create user auth token table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.149516475Z level=info msg="Migration successfully executed" id="create user auth token table" duration=923.958µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.16678781Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.167799279Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.018139ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.180656324Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.182175827Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.519853ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.194007213Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.19482682Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=820.287µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.207615705Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:08 
grafana | logger=migrator t=2024-06-06T23:13:27.212416108Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.801133ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.226896507Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.22838369Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.487783ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.243463225Z level=info msg="Executing migration" id="create cache_data table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.246602493Z level=info msg="Migration successfully executed" id="create cache_data table" duration=3.146028ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.254700716Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.255579494Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=882.718µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.262628807Z level=info msg="Executing migration" id="create short_url table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.263759327Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.11429ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.270430346Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.271695658Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.266262ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.283470053Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.283576994Z level=info 
msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=111.871µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.290457005Z level=info msg="Executing migration" id="delete alert_definition table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.290576576Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=120.891µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.295703752Z level=info msg="Executing migration" id="recreate alert_definition table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.296872573Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.169001ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.305959664Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.306867312Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=910.938µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.309774178Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.311257261Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.483063ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.315775362Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.315886143Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=111.861µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.322190349Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.322897945Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=709.386µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.327641048Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.328462745Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=821.077µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.335157455Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.336238465Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.08098ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.343314948Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.344524549Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.212581ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.349238211Z level=info msg="Executing migration" id="Add column paused in alert_definition"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.355102673Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.864192ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.359590493Z level=info msg="Executing migration" id="drop alert_definition table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.360601053Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.01026ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.367776157Z level=info msg="Executing migration" id="delete alert_definition_version table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.367899158Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=123.851µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.374674368Z level=info msg="Executing migration" id="recreate alert_definition_version table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.375642187Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=967.709µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.385926149Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.387280911Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.355532ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.392947032Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.394424685Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.476973ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.404301493Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.404511015Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=215.392µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.409220887Z level=info msg="Executing migration" id="drop alert_definition_version table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.410149396Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=928.228µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.418398749Z level=info msg="Executing migration" id="create alert_instance table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.419367038Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=968.059µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.424478044Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.425546913Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.068839ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.429350657Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.430254555Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=903.278µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.44087005Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.448139645Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.272315ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.453720045Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.454616983Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=898.228µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.460793048Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.461669636Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=876.588µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.468396966Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.492438291Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=24.050015ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.495793501Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.516341075Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=20.546794ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.519679725Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.520668854Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=988.839µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.524948212Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.52586019Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=911.708µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.529062199Z level=info msg="Executing migration" id="add current_reason column related to current_state"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.534575978Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.513139ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.538008679Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.543528768Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.519889ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.549026897Z level=info msg="Executing migration" id="create alert_rule table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.549944975Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=918.328µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.554274544Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.555201952Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=926.548µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.558953376Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.559904344Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=950.628µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.56495845Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.566507273Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.545233ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.570511089Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.57057213Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=61.721µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.574148802Z level=info msg="Executing migration" id="add column for to alert_rule"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.582278944Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.128802ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.58732352Z level=info msg="Executing migration" id="add column annotations to alert_rule"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.593349083Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.022643ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.596904925Z level=info msg="Executing migration" id="add column labels to alert_rule"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.605947656Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=9.047901ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.609563118Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.610244194Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=682.766µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.615707173Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.616838163Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.13201ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.620334695Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.629216644Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.882299ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.633047688Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.637392657Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.340449ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.641695836Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.642732715Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.036569ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.646376808Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.652595623Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.218515ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.655895553Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.663549631Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.652818ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.668033881Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.668103272Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=69.871µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.67127954Z level=info msg="Executing migration" id="create alert_rule_version table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.67241861Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.13888ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.676011842Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.677038322Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.02299ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.681574292Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.682643682Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.0691ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.686355325Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.686426676Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=74.931µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.691286029Z level=info msg="Executing migration" id="add column for to alert_rule_version"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.697321273Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.037804ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.704358876Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.710562131Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.203125ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.714375415Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.720873504Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.497889ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.724429585Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.728744834Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.315149ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.733498586Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.740098355Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.599289ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.74393692Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.744018071Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=84.52µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.74727286Z level=info msg="Executing migration" id=create_alert_configuration_table
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.748366109Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.092249ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.751714209Z level=info msg="Executing migration" id="Add column default in alert_configuration"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.758305048Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.590049ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.762536896Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.762643907Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=107.591µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.766011757Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.772652807Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.639649ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.776081337Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.777076736Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=995.279µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.781247373Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.787957033Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.70946ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.791335494Z level=info msg="Executing migration" id=create_ngalert_configuration_table
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.792169441Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=833.637µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.795241938Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.796277378Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.03527ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.800335374Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.808490877Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.153733ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.812596544Z level=info msg="Executing migration" id="create provenance_type table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.813207789Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=612.155µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.818435146Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.819168052Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=729.916µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.828689078Z level=info msg="Executing migration" id="create alert_image table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.832615033Z level=info msg="Migration successfully executed" id="create alert_image table" duration=3.927186ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.838619056Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.839755266Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.1363ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.843756732Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.843853873Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=96.901µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.848656186Z level=info msg="Executing migration" id=create_alert_configuration_history_table
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.849756316Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.09988ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.855014743Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.856092343Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.07806ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.859801296Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.86029431Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.86367656Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.864141554Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=464.644µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.86816837Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.869924316Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.756046ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.873445838Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.88046147Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.015632ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.884062733Z level=info msg="Executing migration" id="create library_element table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.88484519Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=782.447µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.8904821Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.892211395Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.729285ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.895662946Z level=info msg="Executing migration" id="create library_element_connection table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.897042129Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.379183ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.902100594Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.90394393Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.842556ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.909263398Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.910332417Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.069019ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.914172952Z level=info msg="Executing migration" id="increase max description length to 2048"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.914202112Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=29.88µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.917680523Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.917749584Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=69.861µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.922807859Z level=info msg="Executing migration" id="add library_element folder uid"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.929751491Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=6.943632ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.934480683Z level=info msg="Executing migration" id="populate library_element folder_uid"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.934810486Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=329.833µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.937944394Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.938751952Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=807.518µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.942570176Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.942845078Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=272.102µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.949402187Z level=info msg="Executing migration" id="create data_keys table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.950679258Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.277071ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.958611939Z level=info msg="Executing migration" id="create secrets table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.959222875Z level=info msg="Migration successfully executed" id="create secrets table" duration=608.126µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.964213899Z level=info msg="Executing migration" id="rename data_keys name column to id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:27.998634207Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.417818ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.007632481Z level=info msg="Executing migration" id="add name column into data_keys"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.012777939Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.145278ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.016870268Z level=info msg="Executing migration" id="copy data_keys id column values into name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.016986709Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=116.911µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.020189799Z level=info msg="Executing migration" id="rename data_keys name column to label"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.060762761Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=40.568302ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.067517235Z level=info msg="Executing migration" id="rename data_keys id column back to name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.096191915Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=28.67302ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.102651926Z level=info msg="Executing migration" id="create kv_store table v1"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.103288042Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=628.906µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.110021495Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.111075235Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.05354ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.118594986Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.118809098Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=214.282µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.121977478Z level=info msg="Executing migration" id="create permission table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.122776955Z level=info msg="Migration successfully executed" id="create permission table" duration=799.697µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.125973036Z level=info msg="Executing migration" id="add unique index permission.role_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.1274819Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.509485ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.130776421Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.13178439Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.008399ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.135947039Z level=info msg="Executing migration" id="create role table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.136865048Z level=info msg="Migration successfully executed" id="create role table" duration=917.389µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.140821175Z level=info msg="Executing migration" id="add column display_name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.147995733Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.174318ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.151550676Z level=info msg="Executing migration" id="add column group_name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.158776414Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.225738ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.163860102Z level=info msg="Executing migration" id="add index role.org_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.164585379Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=724.777µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.167618938Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.168360065Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=738.437µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.171198171Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.172198081Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=999.91µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.178237018Z level=info msg="Executing migration" id="create team role table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.179105476Z level=info msg="Migration successfully executed" id="create team role table" duration=868.458µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.182199025Z level=info msg="Executing migration" id="add index team_role.org_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.183169164Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=969.939µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.18808263Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.18914783Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.06518ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.194261339Z level=info msg="Executing migration" id="add index team_role.team_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.194992385Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=731.246µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.198343957Z level=info msg="Executing migration" id="create user role table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.198932233Z level=info msg="Migration successfully executed" id="create user role table" duration=588.446µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.206021519Z level=info msg="Executing migration" id="add index user_role.org_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.206919838Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=898.609µs
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.212868864Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.213983014Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.11338ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.217119554Z level=info msg="Executing migration" id="add index user_role.user_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.218243524Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.12418ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.222848448Z level=info msg="Executing migration" id="create builtin role table"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.224233861Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.385413ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.229287478Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.230316508Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.02903ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.23364597Z level=info msg="Executing migration" id="add index builtin_role.name"
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.234653989Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.00829ms
23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.238551916Z level=info msg="Executing migration" id="Add
column org_id to builtin_role table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.246310619Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.757863ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.250578039Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.251582378Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.004029ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.256631596Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.257657546Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.02588ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.260627914Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.261575123Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=947.319µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.266276627Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.267286416Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.017029ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.270935521Z level=info msg="Executing migration" id="create seed assignment table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.271706528Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=771.047µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.275900117Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:08 grafana | 
logger=migrator t=2024-06-06T23:13:28.276692865Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=792.678µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.282448289Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.290046011Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.596982ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.294090759Z level=info msg="Executing migration" id="permission kind migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.300242087Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.150058ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.30484856Z level=info msg="Executing migration" id="permission attribute migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.312988037Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.131457ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.317295467Z level=info msg="Executing migration" id="permission identifier migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.323348624Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.052817ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.329531923Z level=info msg="Executing migration" id="add permission identifier index" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.330734374Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.202352ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.336061934Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.337098254Z level=info msg="Migration successfully executed" 
id="add permission action scope role_id index" duration=1.03593ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.342113501Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.343776317Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.663036ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.349499281Z level=info msg="Executing migration" id="create query_history table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.35049698Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.00235ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.354236385Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.355395636Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.158561ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.359412534Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.359462854Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=51.4µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.365524541Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.365559452Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=35.961µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.367772703Z level=info msg="Executing migration" id="teams permissions migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.368124406Z level=info msg="Migration 
successfully executed" id="teams permissions migration" duration=351.933µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.372211214Z level=info msg="Executing migration" id="dashboard permissions" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.372628558Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=417.954µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.375403785Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.375868839Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=465.325µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.38127779Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.381630983Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=354.403µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.384910494Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.385628901Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=719.077µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.388982232Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.389612688Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=632.316µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.392698117Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.393500275Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=802.428µs 23:16:08 grafana 
| logger=migrator t=2024-06-06T23:13:28.401578651Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.408390775Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.812134ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.411292442Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.411346433Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=55.041µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.414760925Z level=info msg="Executing migration" id="create correlation table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.415789965Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.02856ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.426717858Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.427786008Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.07279ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.431496363Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.432588303Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.09196ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.437154066Z level=info msg="Executing migration" id="add correlation config column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.445363053Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.205597ms 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:28.449413251Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.451494501Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.08611ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.455180906Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.456389477Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.208971ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.462381474Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.480312032Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=17.930398ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.486518761Z level=info msg="Executing migration" id="create correlation v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.487591101Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.07152ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.496602136Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.498636805Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.057379ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.505328458Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.506381498Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.05807ms 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:28.51086993Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.511656678Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=781.238µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.515764536Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.515945858Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=181.482µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.519428151Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.520202868Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=774.677µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.52466648Z level=info msg="Executing migration" id="add provisioning column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.532867657Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.198117ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.536386501Z level=info msg="Executing migration" id="create entity_events table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.536982486Z level=info msg="Migration successfully executed" id="create entity_events table" duration=596.715µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.542493098Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.543295756Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=802.348µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.547614196Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:08 
grafana | logger=migrator t=2024-06-06T23:13:28.548080681Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.551839156Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.55228327Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.55646869Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.557203137Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=734.837µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.561418536Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.562112623Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=693.967µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.564666577Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.565429534Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=762.867µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.570708854Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.571816404Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.10223ms 23:16:08 
grafana | logger=migrator t=2024-06-06T23:13:28.576172185Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.5777265Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.554785ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.583228332Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.584810467Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.582375ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.589171468Z level=info msg="Executing migration" id="Drop public config table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.590331249Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.160481ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.594917242Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.596074073Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.151631ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.599968729Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.60111274Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.144031ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.605288889Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.606331199Z level=info msg="Migration successfully executed" 
id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.04399ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.610003574Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.611107664Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.10396ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.614773509Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.637514743Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.740914ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.641774943Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.650090501Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.315378ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.653866927Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.662148445Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.280698ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.668489765Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.668713997Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=228.082µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.674268969Z level=info msg="Executing migration" id="add share column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.686486224Z level=info 
msg="Migration successfully executed" id="add share column" duration=12.213455ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.689779085Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.689907416Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=126.441µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.694209417Z level=info msg="Executing migration" id="create file table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.695186186Z level=info msg="Migration successfully executed" id="create file table" duration=976.879µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.701075822Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.702610036Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.534395ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.71256551Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.714405607Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.843757ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.722668195Z level=info msg="Executing migration" id="create file_meta table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.723496963Z level=info msg="Migration successfully executed" id="create file_meta table" duration=828.698µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.729411768Z level=info msg="Executing migration" id="file table idx: path key" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.731252506Z level=info msg="Migration successfully executed" id="file table idx: path 
key" duration=1.839448ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.736415654Z level=info msg="Executing migration" id="set path collation in file table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.736497035Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=66.531µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.740713475Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.740782765Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=70.1µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.744847014Z level=info msg="Executing migration" id="managed permissions migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.745689572Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=843.118µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.751128153Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.751579137Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=451.314µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.756674575Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.758385831Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.711926ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.7625512Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.771398904Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.840304ms 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:28.774794916Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.774949697Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=155.191µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.779418829Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.780522Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.102881ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.784361816Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.785062372Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=701.476µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.790929848Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.791263311Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=334.013µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.795741813Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.7964962Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=754.717µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.802017752Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.811020497Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.999635ms 23:16:08 grafana | logger=migrator 
t=2024-06-06T23:13:28.817129354Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.825739456Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.609571ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.831251967Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.832279817Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.0308ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.837554947Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.914434761Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.880864ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.917594401Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.918396418Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=802.187µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.922328915Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.923113713Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=784.608µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.926990269Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.952023605Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" 
duration=25.036366ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.957050122Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.965463421Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.412509ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.970872572Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.971097094Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=227.092µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.973982372Z level=info msg="Executing migration" id="prevent seeding OnCall access" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.974106613Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=121.601µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.979627675Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.979975428Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=347.933µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.98340616Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.983716353Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=310.563µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.988477878Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.988774801Z level=info msg="Migration successfully 
executed" id="migrate external alertmanagers to datsourcse" duration=297.423µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.992580107Z level=info msg="Executing migration" id="create folder table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.994108571Z level=info msg="Migration successfully executed" id="create folder table" duration=1.529914ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:28.998566023Z level=info msg="Executing migration" id="Add index for parent_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.00035465Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.787867ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.005105905Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.006189675Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.08384ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.009485617Z level=info msg="Executing migration" id="Update folder title length" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.009511887Z level=info msg="Migration successfully executed" id="Update folder title length" duration=27.25µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.012473616Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.014280663Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.806377ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.021024047Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.022094037Z level=info msg="Migration successfully executed" id="Remove unique index 
for folder.title and folder.parent_uid" duration=1.06998ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.025801143Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.027471939Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.664116ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.031917951Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.032576997Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=667.906µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.037643936Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.037889228Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=245.562µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.041525233Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.042834995Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.308982ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.04646649Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.048309467Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.842387ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.053450626Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.054576697Z level=info msg="Migration 
successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.126681ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.060925318Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.062909967Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.983629ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.068052476Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.069197647Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.144731ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.074134164Z level=info msg="Executing migration" id="create anon_device table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.075148873Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.013019ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.082988498Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.08426181Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.274032ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.089021936Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.090236027Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.213871ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.093759871Z level=info msg="Executing migration" id="create signing_key table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.094613999Z level=info msg="Migration successfully executed" 
id="create signing_key table" duration=854.148µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.099177572Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.100633346Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.455334ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.106080218Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.107603163Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.518345ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.111280668Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.111742522Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=464.474µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.116693739Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.125189761Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=8.495991ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.129906855Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.130468251Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=562.066µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.134883213Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.134945874Z level=info msg="Migration 
successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=65.861µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.139457847Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.141009141Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.550024ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.146218031Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.146244511Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=28.1µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.151610262Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.153013456Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.406314ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.156855712Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.158744761Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.887438ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.166290622Z level=info msg="Executing migration" id="create sso_setting table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.168209711Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.918999ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.172861685Z level=info msg="Executing migration" id="copy kvstore 
migration status to each org" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.174519001Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.658706ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.179542159Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.179982993Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=447.264µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.185452445Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.186235653Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=789.438µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.191862946Z level=info msg="Executing migration" id="create cloud_migration table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.193369151Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.510945ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.197253328Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.198782932Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.529924ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.203322876Z level=info msg="Executing migration" id="add stack_id column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.212862737Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.541291ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.217599592Z level=info msg="Executing migration" id="add region_slug column" 23:16:08 
grafana | logger=migrator t=2024-06-06T23:13:29.226197854Z level=info msg="Migration successfully executed" id="add region_slug column" duration=8.602752ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.230619296Z level=info msg="Executing migration" id="add cluster_slug column" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.240050286Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.43063ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.24464085Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.244800231Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=160.001µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.249970981Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.259538562Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.567082ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.263047255Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.270086592Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.038177ms 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.273259843Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.273637306Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=377.693µs 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.276769196Z level=info msg="migrations completed" performed=558 skipped=0 
duration=4.831043293s 23:16:08 grafana | logger=migrator t=2024-06-06T23:13:29.277464983Z level=info msg="Unlocking database" 23:16:08 grafana | logger=sqlstore t=2024-06-06T23:13:29.29188929Z level=info msg="Created default admin" user=admin 23:16:08 grafana | logger=sqlstore t=2024-06-06T23:13:29.292185813Z level=info msg="Created default organization" 23:16:08 grafana | logger=secrets t=2024-06-06T23:13:29.300014488Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:08 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-06-06T23:13:29.341851507Z level=info msg="Restored cache from database" duration=452.865µs 23:16:08 grafana | logger=plugin.store t=2024-06-06T23:13:29.343152869Z level=info msg="Loading plugins..." 23:16:08 grafana | logger=plugins.registration t=2024-06-06T23:13:29.375223375Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered" 23:16:08 grafana | logger=plugins.initialization t=2024-06-06T23:13:29.375246055Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered" 23:16:08 grafana | logger=local.finder t=2024-06-06T23:13:29.375349626Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:08 grafana | logger=plugin.store t=2024-06-06T23:13:29.375364956Z level=info msg="Plugins loaded" count=54 duration=32.212647ms 23:16:08 grafana | logger=query_data t=2024-06-06T23:13:29.380033721Z level=info msg="Query Service initialization" 23:16:08 grafana | logger=live.push_http t=2024-06-06T23:13:29.383626945Z level=info msg="Live Push Gateway initialization" 23:16:08 grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-06-06T23:13:29.391098326Z level=info msg="Applying new configuration to Alertmanager" configHash=a013a3f424edb13bed8050eaf374d506 23:16:08 grafana | logger=ngalert.state.manager t=2024-06-06T23:13:29.399012342Z 
level=info msg="Running in alternative execution of Error/NoData mode" 23:16:08 grafana | logger=infra.usagestats.collector t=2024-06-06T23:13:29.401297684Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:08 grafana | logger=provisioning.datasources t=2024-06-06T23:13:29.404340293Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:08 grafana | logger=provisioning.alerting t=2024-06-06T23:13:29.424741297Z level=info msg="starting to provision alerting" 23:16:08 grafana | logger=provisioning.alerting t=2024-06-06T23:13:29.424762787Z level=info msg="finished to provision alerting" 23:16:08 grafana | logger=ngalert.state.manager t=2024-06-06T23:13:29.425691916Z level=info msg="Warming state cache for startup" 23:16:08 grafana | logger=grafanaStorageLogger t=2024-06-06T23:13:29.426399293Z level=info msg="Storage starting" 23:16:08 grafana | logger=http.server t=2024-06-06T23:13:29.431996916Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:08 grafana | logger=ngalert.multiorg.alertmanager t=2024-06-06T23:13:29.432303289Z level=info msg="Starting MultiOrg Alertmanager" 23:16:08 grafana | logger=plugins.update.checker t=2024-06-06T23:13:29.495964636Z level=info msg="Update check succeeded" duration=70.861785ms 23:16:08 grafana | logger=grafana.update.checker t=2024-06-06T23:13:29.50053272Z level=info msg="Update check succeeded" duration=75.53234ms 23:16:08 grafana | logger=provisioning.dashboard t=2024-06-06T23:13:29.793449373Z level=info msg="starting to provision dashboards" 23:16:08 grafana | logger=grafana-apiserver t=2024-06-06T23:13:30.03781141Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:08 grafana | logger=grafana-apiserver t=2024-06-06T23:13:30.038409405Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 23:16:08 grafana | logger=provisioning.dashboard 
t=2024-06-06T23:13:30.253939244Z level=info msg="finished to provision dashboards" 23:16:08 grafana | logger=ngalert.state.manager t=2024-06-06T23:13:30.310093463Z level=info msg="State cache has been initialized" states=0 duration=884.400167ms 23:16:08 grafana | logger=ngalert.scheduler t=2024-06-06T23:13:30.310165434Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:16:08 grafana | logger=ticker t=2024-06-06T23:13:30.310366226Z level=info msg=starting first_tick=2024-06-06T23:13:40Z 23:16:08 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-06-06T23:13:30.363084212Z level=info msg="Patterns update finished" duration=63.7331ms 23:16:08 grafana | logger=infra.usagestats t=2024-06-06T23:14:09.43640102Z level=info msg="Usage stats are ready to report" 23:16:08 =================================== 23:16:08 ======== Logs from kafka ======== 23:16:08 kafka | ===> User 23:16:08 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:08 kafka | ===> Configuring ... 23:16:08 kafka | Running in Zookeeper mode... 23:16:08 kafka | ===> Running preflight checks ... 23:16:08 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:08 kafka | ===> Check if Zookeeper is healthy ... 23:16:08 kafka | [2024-06-06 23:13:27,262] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,262] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,262] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,262] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,262] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,262] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-
2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client 
environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,263] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,266] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:27,269] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:08 kafka | [2024-06-06 23:13:27,273] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:08 kafka | [2024-06-06 23:13:27,280] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:27,326] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:27,327] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:27,338] INFO Socket connection established, initiating session, client: /172.17.0.6:33396, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:27,379] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000002dd180000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:27,508] INFO EventThread shut down for session: 0x1000002dd180000 (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:27,508] INFO Session: 0x1000002dd180000 closed (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:08 kafka | ===> Launching ... 23:16:08 kafka | ===> Launching kafka ... 23:16:08 kafka | [2024-06-06 23:13:28,252] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:08 kafka | [2024-06-06 23:13:28,596] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:08 kafka | [2024-06-06 23:13:28,670] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:08 kafka | [2024-06-06 23:13:28,671] INFO starting (kafka.server.KafkaServer) 23:16:08 kafka | [2024-06-06 23:13:28,672] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:08 kafka | [2024-06-06 23:13:28,693] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 23:16:08 kafka | [2024-06-06 23:13:28,699] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,699] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,699] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,699] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,699] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,699] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../
share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1
-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../s
hare/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,700] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,700] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,700] INFO Client environment:java.compiler= 
(org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,700] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,700] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,700] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,701] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,701] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,701] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,701] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,701] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,701] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,703] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) 23:16:08 kafka | [2024-06-06 23:13:28,708] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:08 kafka | [2024-06-06 23:13:28,713] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:28,715] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:08 kafka | [2024-06-06 23:13:28,718] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:28,726] INFO Socket connection established, initiating session, client: /172.17.0.6:33398, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:28,734] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000002dd180001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:08 kafka | [2024-06-06 23:13:28,738] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:16:08 kafka | [2024-06-06 23:13:29,052] INFO Cluster ID = g5nVYXh1RN-GfsPixxR24w (kafka.server.KafkaServer) 23:16:08 kafka | [2024-06-06 23:13:29,055] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:08 kafka | [2024-06-06 23:13:29,100] INFO KafkaConfig values: 23:16:08 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:08 kafka | alter.config.policy.class.name = null 23:16:08 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:08 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:08 kafka | authorizer.class.name = 23:16:08 kafka | auto.create.topics.enable = true 23:16:08 kafka | auto.include.jmx.reporter = true 23:16:08 kafka | auto.leader.rebalance.enable = true 23:16:08 kafka | background.threads = 10 23:16:08 kafka | broker.heartbeat.interval.ms = 2000 23:16:08 kafka | broker.id = 1 23:16:08 kafka | broker.id.generation.enable = true 23:16:08 kafka | broker.rack = null 23:16:08 kafka | broker.session.timeout.ms = 9000 23:16:08 kafka | client.quota.callback.class = null 23:16:08 kafka | compression.type = producer 23:16:08 kafka | connection.failed.authentication.delay.ms = 100 23:16:08 kafka | connections.max.idle.ms = 600000 23:16:08 kafka | connections.max.reauth.ms = 0 23:16:08 kafka | control.plane.listener.name = null 23:16:08 kafka | 
controlled.shutdown.enable = true 23:16:08 kafka | controlled.shutdown.max.retries = 3 23:16:08 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:08 kafka | controller.listener.names = null 23:16:08 kafka | controller.quorum.append.linger.ms = 25 23:16:08 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:08 kafka | controller.quorum.election.timeout.ms = 1000 23:16:08 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:08 kafka | controller.quorum.request.timeout.ms = 2000 23:16:08 kafka | controller.quorum.retry.backoff.ms = 20 23:16:08 kafka | controller.quorum.voters = [] 23:16:08 kafka | controller.quota.window.num = 11 23:16:08 kafka | controller.quota.window.size.seconds = 1 23:16:08 kafka | controller.socket.timeout.ms = 30000 23:16:08 kafka | create.topic.policy.class.name = null 23:16:08 kafka | default.replication.factor = 1 23:16:08 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:08 kafka | delegation.token.expiry.time.ms = 86400000 23:16:08 kafka | delegation.token.master.key = null 23:16:08 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:08 kafka | delegation.token.secret.key = null 23:16:08 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:08 kafka | delete.topic.enable = true 23:16:08 kafka | early.start.listeners = null 23:16:08 kafka | fetch.max.bytes = 57671680 23:16:08 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:08 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:08 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:08 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:08 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:08 kafka | group.consumer.max.size = 2147483647 23:16:08 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:08 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:08 kafka | group.consumer.session.timeout.ms = 45000 23:16:08 kafka | 
group.coordinator.new.enable = false 23:16:08 kafka | group.coordinator.threads = 1 23:16:08 kafka | group.initial.rebalance.delay.ms = 3000 23:16:08 kafka | group.max.session.timeout.ms = 1800000 23:16:08 kafka | group.max.size = 2147483647 23:16:08 kafka | group.min.session.timeout.ms = 6000 23:16:08 kafka | initial.broker.registration.timeout.ms = 60000 23:16:08 kafka | inter.broker.listener.name = PLAINTEXT 23:16:08 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:08 kafka | kafka.metrics.polling.interval.secs = 10 23:16:08 kafka | kafka.metrics.reporters = [] 23:16:08 kafka | leader.imbalance.check.interval.seconds = 300 23:16:08 kafka | leader.imbalance.per.broker.percentage = 10 23:16:08 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:08 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:08 kafka | log.cleaner.backoff.ms = 15000 23:16:08 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:08 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:08 kafka | log.cleaner.enable = true 23:16:08 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:08 kafka | log.cleaner.io.buffer.size = 524288 23:16:08 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:08 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:08 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:08 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:08 kafka | log.cleaner.threads = 1 23:16:08 kafka | log.cleanup.policy = [delete] 23:16:08 kafka | log.dir = /tmp/kafka-logs 23:16:08 kafka | log.dirs = /var/lib/kafka/data 23:16:08 kafka | log.flush.interval.messages = 9223372036854775807 23:16:08 kafka | log.flush.interval.ms = null 23:16:08 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:08 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:08 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:08 kafka | 
log.index.interval.bytes = 4096 23:16:08 kafka | log.index.size.max.bytes = 10485760 23:16:08 kafka | log.local.retention.bytes = -2 23:16:08 kafka | log.local.retention.ms = -2 23:16:08 kafka | log.message.downconversion.enable = true 23:16:08 kafka | log.message.format.version = 3.0-IV1 23:16:08 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:16:08 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:08 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:08 kafka | log.message.timestamp.type = CreateTime 23:16:08 kafka | log.preallocate = false 23:16:08 kafka | log.retention.bytes = -1 23:16:08 kafka | log.retention.check.interval.ms = 300000 23:16:08 kafka | log.retention.hours = 168 23:16:08 kafka | log.retention.minutes = null 23:16:08 kafka | log.retention.ms = null 23:16:08 kafka | log.roll.hours = 168 23:16:08 kafka | log.roll.jitter.hours = 0 23:16:08 kafka | log.roll.jitter.ms = null 23:16:08 kafka | log.roll.ms = null 23:16:08 kafka | log.segment.bytes = 1073741824 23:16:08 kafka | log.segment.delete.delay.ms = 60000 23:16:08 kafka | max.connection.creation.rate = 2147483647 23:16:08 kafka | max.connections = 2147483647 23:16:08 kafka | max.connections.per.ip = 2147483647 23:16:08 kafka | max.connections.per.ip.overrides = 23:16:08 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:08 kafka | message.max.bytes = 1048588 23:16:08 kafka | metadata.log.dir = null 23:16:08 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:08 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:08 kafka | metadata.log.segment.bytes = 1073741824 23:16:08 kafka | metadata.log.segment.min.bytes = 8388608 23:16:08 kafka | metadata.log.segment.ms = 604800000 23:16:08 kafka | metadata.max.idle.interval.ms = 500 23:16:08 kafka | metadata.max.retention.bytes = 104857600 23:16:08 kafka | metadata.max.retention.ms = 604800000 23:16:08 kafka | metric.reporters = [] 23:16:08 kafka | 
metrics.num.samples = 2 23:16:08 kafka | metrics.recording.level = INFO 23:16:08 kafka | metrics.sample.window.ms = 30000 23:16:08 kafka | min.insync.replicas = 1 23:16:08 kafka | node.id = 1 23:16:08 kafka | num.io.threads = 8 23:16:08 kafka | num.network.threads = 3 23:16:08 kafka | num.partitions = 1 23:16:08 kafka | num.recovery.threads.per.data.dir = 1 23:16:08 kafka | num.replica.alter.log.dirs.threads = null 23:16:08 kafka | num.replica.fetchers = 1 23:16:08 kafka | offset.metadata.max.bytes = 4096 23:16:08 kafka | offsets.commit.required.acks = -1 23:16:08 kafka | offsets.commit.timeout.ms = 5000 23:16:08 kafka | offsets.load.buffer.size = 5242880 23:16:08 kafka | offsets.retention.check.interval.ms = 600000 23:16:08 kafka | offsets.retention.minutes = 10080 23:16:08 kafka | offsets.topic.compression.codec = 0 23:16:08 kafka | offsets.topic.num.partitions = 50 23:16:08 kafka | offsets.topic.replication.factor = 1 23:16:08 kafka | offsets.topic.segment.bytes = 104857600 23:16:08 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:08 kafka | password.encoder.iterations = 4096 23:16:08 kafka | password.encoder.key.length = 128 23:16:08 kafka | password.encoder.keyfactory.algorithm = null 23:16:08 kafka | password.encoder.old.secret = null 23:16:08 kafka | password.encoder.secret = null 23:16:08 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:08 kafka | process.roles = [] 23:16:08 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:08 kafka | producer.id.expiration.ms = 86400000 23:16:08 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:08 kafka | queued.max.request.bytes = -1 23:16:08 kafka | queued.max.requests = 500 23:16:08 kafka | quota.window.num = 11 23:16:08 kafka | quota.window.size.seconds = 1 23:16:08 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:08 kafka | remote.log.manager.task.interval.ms = 30000 
23:16:08 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:08 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:08 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:08 kafka | remote.log.manager.thread.pool.size = 10 23:16:08 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:08 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:08 kafka | remote.log.metadata.manager.class.path = null 23:16:08 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:08 kafka | remote.log.metadata.manager.listener.name = null 23:16:08 kafka | remote.log.reader.max.pending.tasks = 100 23:16:08 kafka | remote.log.reader.threads = 10 23:16:08 kafka | remote.log.storage.manager.class.name = null 23:16:08 kafka | remote.log.storage.manager.class.path = null 23:16:08 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 23:16:08 kafka | remote.log.storage.system.enable = false 23:16:08 kafka | replica.fetch.backoff.ms = 1000 23:16:08 kafka | replica.fetch.max.bytes = 1048576 23:16:08 kafka | replica.fetch.min.bytes = 1 23:16:08 kafka | replica.fetch.response.max.bytes = 10485760 23:16:08 kafka | replica.fetch.wait.max.ms = 500 23:16:08 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:08 kafka | replica.lag.time.max.ms = 30000 23:16:08 kafka | replica.selector.class = null 23:16:08 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:08 kafka | replica.socket.timeout.ms = 30000 23:16:08 kafka | replication.quota.window.num = 11 23:16:08 kafka | replication.quota.window.size.seconds = 1 23:16:08 kafka | request.timeout.ms = 30000 23:16:08 kafka | reserved.broker.max.id = 1000 23:16:08 kafka | sasl.client.callback.handler.class = null 23:16:08 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:08 kafka | sasl.jaas.config = null 23:16:08 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:08 kafka | 
sasl.kerberos.min.time.before.relogin = 60000 23:16:08 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:08 kafka | sasl.kerberos.service.name = null 23:16:08 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:08 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:08 kafka | sasl.login.callback.handler.class = null 23:16:08 kafka | sasl.login.class = null 23:16:08 kafka | sasl.login.connect.timeout.ms = null 23:16:08 kafka | sasl.login.read.timeout.ms = null 23:16:08 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:08 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:08 kafka | sasl.login.refresh.window.factor = 0.8 23:16:08 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:08 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:08 kafka | sasl.login.retry.backoff.ms = 100 23:16:08 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:08 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:08 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:08 kafka | sasl.oauthbearer.expected.audience = null 23:16:08 kafka | sasl.oauthbearer.expected.issuer = null 23:16:08 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:08 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:08 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:08 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:08 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:08 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:08 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:08 kafka | sasl.server.callback.handler.class = null 23:16:08 kafka | sasl.server.max.receive.size = 524288 23:16:08 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:08 kafka | security.providers = null 23:16:08 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:08 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:08 kafka | socket.connection.setup.timeout.ms = 10000 23:16:08 kafka 
| socket.listen.backlog.size = 50 23:16:08 kafka | socket.receive.buffer.bytes = 102400 23:16:08 kafka | socket.request.max.bytes = 104857600 23:16:08 kafka | socket.send.buffer.bytes = 102400 23:16:08 kafka | ssl.cipher.suites = [] 23:16:08 kafka | ssl.client.auth = none 23:16:08 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:08 kafka | ssl.endpoint.identification.algorithm = https 23:16:08 kafka | ssl.engine.factory.class = null 23:16:08 kafka | ssl.key.password = null 23:16:08 kafka | ssl.keymanager.algorithm = SunX509 23:16:08 kafka | ssl.keystore.certificate.chain = null 23:16:08 kafka | ssl.keystore.key = null 23:16:08 kafka | ssl.keystore.location = null 23:16:08 kafka | ssl.keystore.password = null 23:16:08 kafka | ssl.keystore.type = JKS 23:16:08 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:08 kafka | ssl.protocol = TLSv1.3 23:16:08 kafka | ssl.provider = null 23:16:08 kafka | ssl.secure.random.implementation = null 23:16:08 kafka | ssl.trustmanager.algorithm = PKIX 23:16:08 kafka | ssl.truststore.certificates = null 23:16:08 kafka | ssl.truststore.location = null 23:16:08 kafka | ssl.truststore.password = null 23:16:08 kafka | ssl.truststore.type = JKS 23:16:08 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:08 kafka | transaction.max.timeout.ms = 900000 23:16:08 kafka | transaction.partition.verification.enable = true 23:16:08 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:08 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:08 kafka | transaction.state.log.min.isr = 2 23:16:08 kafka | transaction.state.log.num.partitions = 50 23:16:08 kafka | transaction.state.log.replication.factor = 3 23:16:08 kafka | transaction.state.log.segment.bytes = 104857600 23:16:08 kafka | transactional.id.expiration.ms = 604800000 23:16:08 kafka | unclean.leader.election.enable = false 23:16:08 kafka | unstable.api.versions.enable = false 23:16:08 kafka | 
zookeeper.clientCnxnSocket = null 23:16:08 kafka | zookeeper.connect = zookeeper:2181 23:16:08 kafka | zookeeper.connection.timeout.ms = null 23:16:08 kafka | zookeeper.max.in.flight.requests = 10 23:16:08 kafka | zookeeper.metadata.migration.enable = false 23:16:08 kafka | zookeeper.metadata.migration.min.batch.size = 200 23:16:08 kafka | zookeeper.session.timeout.ms = 18000 23:16:08 kafka | zookeeper.set.acl = false 23:16:08 kafka | zookeeper.ssl.cipher.suites = null 23:16:08 kafka | zookeeper.ssl.client.enable = false 23:16:08 kafka | zookeeper.ssl.crl.enable = false 23:16:08 kafka | zookeeper.ssl.enabled.protocols = null 23:16:08 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:08 kafka | zookeeper.ssl.keystore.location = null 23:16:08 kafka | zookeeper.ssl.keystore.password = null 23:16:08 kafka | zookeeper.ssl.keystore.type = null 23:16:08 kafka | zookeeper.ssl.ocsp.enable = false 23:16:08 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:08 kafka | zookeeper.ssl.truststore.location = null 23:16:08 kafka | zookeeper.ssl.truststore.password = null 23:16:08 kafka | zookeeper.ssl.truststore.type = null 23:16:08 kafka | (kafka.server.KafkaConfig) 23:16:08 kafka | [2024-06-06 23:13:29,127] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:08 kafka | [2024-06-06 23:13:29,128] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:08 kafka | [2024-06-06 23:13:29,132] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:08 kafka | [2024-06-06 23:13:29,135] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:08 kafka | [2024-06-06 23:13:29,163] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:13:29,166] INFO No logs found to be loaded in 
/var/lib/kafka/data (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:13:29,176] INFO Loaded 0 logs in 13ms (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:13:29,177] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:13:29,178] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:13:29,193] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:08 kafka | [2024-06-06 23:13:29,236] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:08 kafka | [2024-06-06 23:13:29,251] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:08 kafka | [2024-06-06 23:13:29,263] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:08 kafka | [2024-06-06 23:13:29,306] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:08 kafka | [2024-06-06 23:13:29,636] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:08 kafka | [2024-06-06 23:13:29,654] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:08 kafka | [2024-06-06 23:13:29,654] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:08 kafka | [2024-06-06 23:13:29,659] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:08 kafka | [2024-06-06 23:13:29,663] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:08 
kafka | [2024-06-06 23:13:29,686] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,692] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,699] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,701] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,703] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,718] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:08 kafka | [2024-06-06 23:13:29,719] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:08 kafka | [2024-06-06 23:13:29,743] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:16:08 kafka | [2024-06-06 23:13:29,769] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1717715609756,1717715609756,1,0,0,72057606337200129,258,0,27 23:16:08 kafka | (kafka.zk.KafkaZkClient) 23:16:08 kafka | [2024-06-06 23:13:29,770] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:08 kafka | [2024-06-06 23:13:29,816] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:08 kafka | [2024-06-06 23:13:29,822] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,828] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,828] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,840] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:13:29,841] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:08 kafka | [2024-06-06 23:13:29,844] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:13:29,849] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,852] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,856] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:08 kafka | [2024-06-06 23:13:29,857] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:08 kafka | [2024-06-06 23:13:29,860] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:08 kafka | [2024-06-06 23:13:29,860] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:08 kafka | [2024-06-06 23:13:29,890] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 23:16:08 kafka | [2024-06-06 23:13:29,890] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,895] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,896] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:08 kafka | [2024-06-06 23:13:29,899] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,901] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,918] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,923] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:08 kafka | [2024-06-06 23:13:29,923] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:08 kafka | [2024-06-06 23:13:29,935] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:08 kafka | [2024-06-06 23:13:29,935] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:08 kafka | [2024-06-06 23:13:29,941] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:08 kafka | [2024-06-06 23:13:29,944] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor)
23:16:08 kafka | [2024-06-06 23:13:29,958] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
23:16:08 kafka | [2024-06-06 23:13:29,958] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
23:16:08 kafka | [2024-06-06 23:13:29,958] INFO Kafka startTimeMs: 1717715609953 (org.apache.kafka.common.utils.AppInfoParser)
23:16:08 kafka | [2024-06-06 23:13:29,960] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
23:16:08 kafka | [2024-06-06 23:13:29,966] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:16:08 kafka | [2024-06-06 23:13:29,967] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,967] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,967] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,968] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,970] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,970] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,971] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,971] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
23:16:08 kafka | [2024-06-06 23:13:29,971] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:29,974] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:13:29,979] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
23:16:08 kafka | [2024-06-06 23:13:29,980] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:08 kafka | [2024-06-06 23:13:29,992] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:08 kafka | [2024-06-06 23:13:29,993] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
23:16:08 kafka | [2024-06-06 23:13:29,994] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
23:16:08 kafka | [2024-06-06 23:13:29,995] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
23:16:08 kafka | [2024-06-06 23:13:29,996] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
23:16:08 kafka | [2024-06-06 23:13:30,005] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
23:16:08 kafka | [2024-06-06 23:13:30,006] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:30,012] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:30,013] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:30,013] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:30,014] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:30,015] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:30,029] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:30,051] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:08 kafka | [2024-06-06 23:13:30,069] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:08 kafka | [2024-06-06 23:13:30,117] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:08 kafka | [2024-06-06 23:13:35,030] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:13:35,031] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:14:01,891] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:08 kafka | [2024-06-06 23:14:01,894] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:08 kafka | [2024-06-06 23:14:01,901] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:14:01,906] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:14:01,920] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(Xe7UbtYQS5qd18dV_gK_7A),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:14:01,921] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:14:01,923] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,924] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,928] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,953] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,964] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,967] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,981] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,984] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,991] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,995] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:01,999] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(Npe-oTx6RBWFMl9qUMicZQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:14:02,000] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
23:16:08 kafka | [2024-06-06 23:14:02,001] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,001] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,002] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,002] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,002] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,002] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,002] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,002] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,002] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,003] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,005] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,007] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,009] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,009] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,010] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,013] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,036] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,044] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
23:16:08 kafka | [2024-06-06 23:14:02,045] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,120] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,138] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,141] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,142] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,143] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(Xe7UbtYQS5qd18dV_gK_7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,156] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,174] INFO [Broker id=1] Finished LeaderAndIsr request in 184ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,197] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Xe7UbtYQS5qd18dV_gK_7A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,207] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,209] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,210] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,229] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)),
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,229] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,229] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,229] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,229] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,229] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,230] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,230] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,230] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,230] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,230] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,230] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,230] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,231] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,231] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,231] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,231] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,231] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,231] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,232] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,232] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,232] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,232] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,232] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,232] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,233] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,233] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,233] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,233] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,233] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,233] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,234] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,234] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,234] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,234] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,234] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,234] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,235] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,235] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,235] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,235] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,235] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,235] 
INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,236] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,236] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,236] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,236] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,236] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | 
[2024-06-06 23:14:02,236] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,236] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,237] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,237] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,237] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,237] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,237] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,237] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,238] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,238] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,238] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,238] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,238] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,238] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,238] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,239] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,240] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-25 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,240] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,240] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,240] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,240] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,240] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,241] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,241] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,241] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,241] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,241] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,241] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,241] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,242] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,242] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,242] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,242] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,242] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,243] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,244] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,247] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,252] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,252] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,252] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,252] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,252] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,252] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,253] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,254] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,254] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,254] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,254] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,254] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,254] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,254] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,255] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,255] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,255] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,255] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,255] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,255] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,255] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,256] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,257] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,257] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,257] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,257] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,257] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,257] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,258] TRACE [Broker id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-27 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,295] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,299] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for 
partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:08 kafka | [2024-06-06 23:14:02,301] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,317] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,319] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,320] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed 
highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,321] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,321] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,332] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,332] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,333] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,333] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,333] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,341] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,342] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,342] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,342] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,342] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,349] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,350] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,350] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,352] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,352] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,363] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,364] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,364] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,364] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,364] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,383] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,386] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,386] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,386] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,387] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,395] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,395] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,395] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,395] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,395] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,405] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,405] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,405] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,405] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,405] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,411] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,411] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,412] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,412] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,412] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,419] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,419] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,419] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,420] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,420] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,426] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,426] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,426] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,426] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,426] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,432] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,433] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,433] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,433] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,433] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,440] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,441] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,441] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,441] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,441] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,448] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,448] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,448] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,448] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,448] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,456] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,457] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,457] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,457] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,457] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,464] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,465] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,465] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,465] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,465] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,470] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,472] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,472] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,472] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,473] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,481] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,481] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,481] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,481] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,482] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,489] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,489] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,489] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,489] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,489] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,495] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,496] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,496] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,496] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,496] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,504] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,504] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,504] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,504] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,504] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,511] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,512] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,512] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,512] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,512] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,518] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,519] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,519] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,519] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,519] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,528] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,529] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,529] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,529] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,530] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,542] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,543] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,543] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,543] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,543] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,552] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,552] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,552] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,552] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,552] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,564] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,564] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,565] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,565] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,565] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,574] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,574] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,574] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,574] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,574] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,586] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,587] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,587] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,587] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,587] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,597] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,598] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,598] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,598] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,598] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,609] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,610] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,610] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,610] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,610] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,618] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,618] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,618] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,619] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,619] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,629] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,631] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,631] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,631] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,631] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,642] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,643] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,643] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,643] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,643] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,650] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,651] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,651] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,652] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,652] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,663] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,664] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,664] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,664] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,664] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,677] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,678] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,678] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,678] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,679] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,720] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,726] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,727] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,727] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,727] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,790] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,791] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,791] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,791] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,791] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,834] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,835] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,836] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,836] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,836] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,843] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,844] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,844] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,844] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,844] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,852] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:08 kafka | [2024-06-06 23:14:02,852] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:08 kafka | [2024-06-06 23:14:02,853] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,853] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
23:16:08 kafka | [2024-06-06 23:14:02,853] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,858] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,859] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,859] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,859] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,860] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,868] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,868] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,868] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,868] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,869] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,876] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,876] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,876] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,877] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,877] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,889] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,890] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,890] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,890] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,891] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,896] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,897] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,897] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,897] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,897] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,904] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,905] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,905] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,905] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,905] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,911] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,911] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,912] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,912] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,912] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,918] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:08 kafka | [2024-06-06 23:14:02,918] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:08 kafka | [2024-06-06 23:14:02,918] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,918] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:08 kafka | [2024-06-06 23:14:02,919] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Npe-oTx6RBWFMl9qUMicZQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,922] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,922] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,922] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,922] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,922] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,922] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,922] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,923] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,924] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,926] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,927] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,928] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,928] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,928] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,930] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,932] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,934] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,934] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,935] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,936] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:02,937] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,938] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,939] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,940] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,941] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:08 kafka | [2024-06-06 23:14:02,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,943] INFO [Broker id=1] Finished LeaderAndIsr request in 691ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,943] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,944] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,944] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,945] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,945] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,945] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Npe-oTx6RBWFMl9qUMicZQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,945] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,945] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,945] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,946] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,946] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,946] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,946] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,946] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,947] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,947] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,947] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,947] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,947] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,948] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,948] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,948] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,948] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,948] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,948] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,948] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,948] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,949] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,950] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,950] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,954] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,954] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,954] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:08 kafka | [2024-06-06 23:14:02,954] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
23:16:08 kafka | [2024-06-06 23:14:02,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,954] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,954] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 
23:14:02,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,955] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,956] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,956] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,956] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,956] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:02,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,960] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:08 kafka | [2024-06-06 23:14:02,960] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:08 kafka | [2024-06-06 23:14:03,034] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-163a505b-7df5-4d24-8282-0d4b1fe2e9f5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:03,035] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c684b99a-d7a2-4465-b216-cb59f20e796f in Empty state. 
Created a new member id consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3-91acf397-7b46-4d0e-b697-59e0aa395f7f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:03,052] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-163a505b-7df5-4d24-8282-0d4b1fe2e9f5 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:03,054] INFO [GroupCoordinator 1]: Preparing to rebalance group c684b99a-d7a2-4465-b216-cb59f20e796f in state PreparingRebalance with old generation 0 (__consumer_offsets-38) (reason: Adding new member consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3-91acf397-7b46-4d0e-b697-59e0aa395f7f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:03,620] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group d005a2fb-e92a-479f-b1ce-62b4bf1c7829 in Empty state. Created a new member id consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2-6475633e-7420-47d1-831e-2edb4cb10f03 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:03,623] INFO [GroupCoordinator 1]: Preparing to rebalance group d005a2fb-e92a-479f-b1ce-62b4bf1c7829 in state PreparingRebalance with old generation 0 (__consumer_offsets-18) (reason: Adding new member consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2-6475633e-7420-47d1-831e-2edb4cb10f03 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:06,065] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:06,078] INFO [GroupCoordinator 1]: Stabilized group c684b99a-d7a2-4465-b216-cb59f20e796f generation 1 (__consumer_offsets-38) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:06,089] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-163a505b-7df5-4d24-8282-0d4b1fe2e9f5 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:06,090] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3-91acf397-7b46-4d0e-b697-59e0aa395f7f for group c684b99a-d7a2-4465-b216-cb59f20e796f for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:06,624] INFO [GroupCoordinator 1]: Stabilized group d005a2fb-e92a-479f-b1ce-62b4bf1c7829 generation 1 (__consumer_offsets-18) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:08 kafka | [2024-06-06 23:14:06,637] INFO [GroupCoordinator 1]: Assignment received from leader consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2-6475633e-7420-47d1-831e-2edb4cb10f03 for group d005a2fb-e92a-479f-b1ce-62b4bf1c7829 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:08 =================================== 23:16:08 ======== Logs from mariadb ======== 23:16:08 mariadb | 2024-06-06 23:13:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:08 mariadb | 2024-06-06 23:13:24+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:08 mariadb | 2024-06-06 23:13:24+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:08 mariadb | 2024-06-06 23:13:24+00:00 [Note] [Entrypoint]: Initializing database files 23:16:08 mariadb | 2024-06-06 23:13:24 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:08 mariadb | 2024-06-06 23:13:24 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:08 mariadb | 2024-06-06 23:13:25 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:08 mariadb | 23:16:08 mariadb | 23:16:08 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
23:16:08 mariadb | To do so, start the server, then issue the following command: 23:16:08 mariadb | 23:16:08 mariadb | '/usr/bin/mysql_secure_installation' 23:16:08 mariadb | 23:16:08 mariadb | which will also give you the option of removing the test 23:16:08 mariadb | databases and anonymous user created by default. This is 23:16:08 mariadb | strongly recommended for production servers. 23:16:08 mariadb | 23:16:08 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:08 mariadb | 23:16:08 mariadb | Please report any problems at https://mariadb.org/jira 23:16:08 mariadb | 23:16:08 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:08 mariadb | 23:16:08 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:08 mariadb | https://mariadb.org/get-involved/ 23:16:08 mariadb | 23:16:08 mariadb | 2024-06-06 23:13:27+00:00 [Note] [Entrypoint]: Database files initialized 23:16:08 mariadb | 2024-06-06 23:13:27+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:08 mariadb | 2024-06-06 23:13:27+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 
23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: Number of transaction pools: 1 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: 128 rollback segments are active. 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:08 mariadb | 2024-06-06 23:13:27 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
23:16:08 mariadb | 2024-06-06 23:13:27 0 [Note] mariadbd: ready for connections. 23:16:08 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:08 mariadb | 2024-06-06 23:13:28+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:08 mariadb | 2024-06-06 23:13:30+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:08 mariadb | 2024-06-06 23:13:30+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:08 mariadb | 23:16:08 mariadb | 2024-06-06 23:13:30+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:08 mariadb | 23:16:08 mariadb | 2024-06-06 23:13:30+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:08 mariadb | #!/bin/bash -xv 23:16:08 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:08 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:08 mariadb | # 23:16:08 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:08 mariadb | # you may not use this file except in compliance with the License. 23:16:08 mariadb | # You may obtain a copy of the License at 23:16:08 mariadb | # 23:16:08 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:08 mariadb | # 23:16:08 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:08 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:08 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:08 mariadb | # See the License for the specific language governing permissions and 23:16:08 mariadb | # limitations under the License. 
23:16:08 mariadb | 23:16:08 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:08 mariadb | do 23:16:08 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:08 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:08 mariadb | done 23:16:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON 
`clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
23:16:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:08 mariadb |
23:16:08 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
23:16:08 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
23:16:08 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
23:16:08 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
23:16:08 mariadb |
23:16:08 mariadb | 2024-06-06 23:13:31+00:00 [Note] [Entrypoint]: Stopping temporary server
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: FTS optimize thread exiting.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Starting shutdown...
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Buffer pool(s) dump completed at 240606 23:13:31
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Shutdown completed; log sequence number 328782; transaction id 298
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] mariadbd: Shutdown complete
23:16:08 mariadb |
23:16:08 mariadb | 2024-06-06 23:13:31+00:00 [Note] [Entrypoint]: Temporary server stopped
23:16:08 mariadb |
23:16:08 mariadb | 2024-06-06 23:13:31+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
23:16:08 mariadb |
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Number of transaction pools: 1
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Completed initialization of buffer pool
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: 128 rollback segments are active.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: log sequence number 328782; transaction id 299
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] Plugin 'FEEDBACK' is disabled.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] Server socket created on IP: '0.0.0.0'.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] Server socket created on IP: '::'.
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] mariadbd: ready for connections.
23:16:08 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
23:16:08 mariadb | 2024-06-06 23:13:31 0 [Note] InnoDB: Buffer pool(s) load completed at 240606 23:13:31
23:16:08 mariadb | 2024-06-06 23:13:31 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
23:16:08 mariadb | 2024-06-06 23:13:31 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
23:16:08 mariadb | 2024-06-06 23:13:32 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
23:16:08 mariadb | 2024-06-06 23:13:32 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
23:16:08 ===================================
23:16:08 ======== Logs from apex-pdp ========
23:16:08 policy-apex-pdp | Waiting for mariadb port 3306...
23:16:08 policy-apex-pdp | mariadb (172.17.0.4:3306) open
23:16:08 policy-apex-pdp | Waiting for kafka port 9092...
23:16:08 policy-apex-pdp | kafka (172.17.0.6:9092) open
23:16:08 policy-apex-pdp | Waiting for pap port 6969...
23:16:08 policy-apex-pdp | pap (172.17.0.10:6969) open
23:16:08 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
23:16:08 policy-apex-pdp | [2024-06-06T23:14:02.825+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.012+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:08 policy-apex-pdp | 	allow.auto.create.topics = true
23:16:08 policy-apex-pdp | 	auto.commit.interval.ms = 5000
23:16:08 policy-apex-pdp | 	auto.include.jmx.reporter = true
23:16:08 policy-apex-pdp | 	auto.offset.reset = latest
23:16:08 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
23:16:08 policy-apex-pdp | 	check.crcs = true
23:16:08 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
23:16:08 policy-apex-pdp | 	client.id = consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-1
23:16:08 policy-apex-pdp | 	client.rack =
23:16:08 policy-apex-pdp | 	connections.max.idle.ms = 540000
23:16:08 policy-apex-pdp | 	default.api.timeout.ms = 60000
23:16:08 policy-apex-pdp | 	enable.auto.commit = true
23:16:08 policy-apex-pdp | 	exclude.internal.topics = true
23:16:08 policy-apex-pdp | 	fetch.max.bytes = 52428800
23:16:08 policy-apex-pdp | 	fetch.max.wait.ms = 500
23:16:08 policy-apex-pdp | 	fetch.min.bytes = 1
23:16:08 policy-apex-pdp | 	group.id = d005a2fb-e92a-479f-b1ce-62b4bf1c7829
23:16:08 policy-apex-pdp | 	group.instance.id = null
23:16:08 policy-apex-pdp | 	heartbeat.interval.ms = 3000
23:16:08 policy-apex-pdp | 	interceptor.classes = []
23:16:08 policy-apex-pdp | 	internal.leave.group.on.close = true
23:16:08 policy-apex-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
23:16:08 policy-apex-pdp | 	isolation.level = read_uncommitted
23:16:08 policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:08 policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
23:16:08 policy-apex-pdp | 	max.poll.interval.ms = 300000
23:16:08 policy-apex-pdp | 	max.poll.records = 500
23:16:08 policy-apex-pdp | 	metadata.max.age.ms = 300000
23:16:08 policy-apex-pdp | 	metric.reporters = []
23:16:08 policy-apex-pdp | 	metrics.num.samples = 2
23:16:08 policy-apex-pdp | 	metrics.recording.level = INFO
23:16:08 policy-apex-pdp | 	metrics.sample.window.ms = 30000
23:16:08 policy-apex-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:08 policy-apex-pdp | 	receive.buffer.bytes = 65536
23:16:08 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
23:16:08 policy-apex-pdp | 	reconnect.backoff.ms = 50
23:16:08 policy-apex-pdp | 	request.timeout.ms = 30000
23:16:08 policy-apex-pdp | 	retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.client.callback.handler.class = null
23:16:08 policy-apex-pdp | 	sasl.jaas.config = null
23:16:08 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:08 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:08 policy-apex-pdp | 	sasl.kerberos.service.name = null
23:16:08 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:08 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:08 policy-apex-pdp | 	sasl.login.callback.handler.class = null
23:16:08 policy-apex-pdp | 	sasl.login.class = null
23:16:08 policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
23:16:08 policy-apex-pdp | 	sasl.login.read.timeout.ms = null
23:16:08 policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
23:16:08 policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
23:16:08 policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
23:16:08 policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
23:16:08 policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
23:16:08 policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.mechanism = GSSAPI
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
23:16:08 policy-apex-pdp | 	security.protocol = PLAINTEXT
23:16:08 policy-apex-pdp | 	security.providers = null
23:16:08 policy-apex-pdp | 	send.buffer.bytes = 131072
23:16:08 policy-apex-pdp | 	session.timeout.ms = 45000
23:16:08 policy-apex-pdp | 	socket.connection.setup.timeout.max.ms = 30000
23:16:08 policy-apex-pdp | 	socket.connection.setup.timeout.ms = 10000
23:16:08 policy-apex-pdp | 	ssl.cipher.suites = null
23:16:08 policy-apex-pdp | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:08 policy-apex-pdp | 	ssl.endpoint.identification.algorithm = https
23:16:08 policy-apex-pdp | 	ssl.engine.factory.class = null
23:16:08 policy-apex-pdp | 	ssl.key.password = null
23:16:08 policy-apex-pdp | 	ssl.keymanager.algorithm = SunX509
23:16:08 policy-apex-pdp | 	ssl.keystore.certificate.chain = null
23:16:08 policy-apex-pdp | 	ssl.keystore.key = null
23:16:08 policy-apex-pdp | 	ssl.keystore.location = null
23:16:08 policy-apex-pdp | 	ssl.keystore.password = null
23:16:08 policy-apex-pdp | 	ssl.keystore.type = JKS
23:16:08 policy-apex-pdp | 	ssl.protocol = TLSv1.3
23:16:08 policy-apex-pdp | 	ssl.provider = null
23:16:08 policy-apex-pdp | 	ssl.secure.random.implementation = null
23:16:08 policy-apex-pdp | 	ssl.trustmanager.algorithm = PKIX
23:16:08 policy-apex-pdp | 	ssl.truststore.certificates = null
23:16:08 policy-apex-pdp | 	ssl.truststore.location = null
23:16:08 policy-apex-pdp | 	ssl.truststore.password = null
23:16:08 policy-apex-pdp | 	ssl.truststore.type = JKS
23:16:08 policy-apex-pdp | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:08 policy-apex-pdp |
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.178+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.178+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.178+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715643176
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.181+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-1, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Subscribed to topic(s): policy-pdp-pap
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.193+00:00|INFO|ServiceManager|main] service manager starting
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.193+00:00|INFO|ServiceManager|main] service manager starting topics
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.195+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d005a2fb-e92a-479f-b1ce-62b4bf1c7829, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.217+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:08 policy-apex-pdp | 	allow.auto.create.topics = true
23:16:08 policy-apex-pdp | 	auto.commit.interval.ms = 5000
23:16:08 policy-apex-pdp | 	auto.include.jmx.reporter = true
23:16:08 policy-apex-pdp | 	auto.offset.reset = latest
23:16:08 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
23:16:08 policy-apex-pdp | 	check.crcs = true
23:16:08 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
23:16:08 policy-apex-pdp | 	client.id = consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2
23:16:08 policy-apex-pdp | 	client.rack =
23:16:08 policy-apex-pdp | 	connections.max.idle.ms = 540000
23:16:08 policy-apex-pdp | 	default.api.timeout.ms = 60000
23:16:08 policy-apex-pdp | 	enable.auto.commit = true
23:16:08 policy-apex-pdp | 	exclude.internal.topics = true
23:16:08 policy-apex-pdp | 	fetch.max.bytes = 52428800
23:16:08 policy-apex-pdp | 	fetch.max.wait.ms = 500
23:16:08 policy-apex-pdp | 	fetch.min.bytes = 1
23:16:08 policy-apex-pdp | 	group.id = d005a2fb-e92a-479f-b1ce-62b4bf1c7829
23:16:08 policy-apex-pdp | 	group.instance.id = null
23:16:08 policy-apex-pdp | 	heartbeat.interval.ms = 3000
23:16:08 policy-apex-pdp | 	interceptor.classes = []
23:16:08 policy-apex-pdp | 	internal.leave.group.on.close = true
23:16:08 policy-apex-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
23:16:08 policy-apex-pdp | 	isolation.level = read_uncommitted
23:16:08 policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:08 policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
23:16:08 policy-apex-pdp | 	max.poll.interval.ms = 300000
23:16:08 policy-apex-pdp | 	max.poll.records = 500
23:16:08 policy-apex-pdp | 	metadata.max.age.ms = 300000
23:16:08 policy-apex-pdp | 	metric.reporters = []
23:16:08 policy-apex-pdp | 	metrics.num.samples = 2
23:16:08 policy-apex-pdp | 	metrics.recording.level = INFO
23:16:08 policy-apex-pdp | 	metrics.sample.window.ms = 30000
23:16:08 policy-apex-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:08 policy-apex-pdp | 	receive.buffer.bytes = 65536
23:16:08 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
23:16:08 policy-apex-pdp | 	reconnect.backoff.ms = 50
23:16:08 policy-apex-pdp | 	request.timeout.ms = 30000
23:16:08 policy-apex-pdp | 	retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.client.callback.handler.class = null
23:16:08 policy-apex-pdp | 	sasl.jaas.config = null
23:16:08 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:08 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:08 policy-apex-pdp | 	sasl.kerberos.service.name = null
23:16:08 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:08 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:08 policy-apex-pdp | 	sasl.login.callback.handler.class = null
23:16:08 policy-apex-pdp | 	sasl.login.class = null
23:16:08 policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
23:16:08 policy-apex-pdp | 	sasl.login.read.timeout.ms = null
23:16:08 policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
23:16:08 policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
23:16:08 policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
23:16:08 policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
23:16:08 policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
23:16:08 policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.mechanism = GSSAPI
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
23:16:08 policy-apex-pdp | 	security.protocol = PLAINTEXT
23:16:08 policy-apex-pdp | 	security.providers = null
23:16:08 policy-apex-pdp | 	send.buffer.bytes = 131072
23:16:08 policy-apex-pdp | 	session.timeout.ms = 45000
23:16:08 policy-apex-pdp | 	socket.connection.setup.timeout.max.ms = 30000
23:16:08 policy-apex-pdp | 	socket.connection.setup.timeout.ms = 10000
23:16:08 policy-apex-pdp | 	ssl.cipher.suites = null
23:16:08 policy-apex-pdp | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:08 policy-apex-pdp | 	ssl.endpoint.identification.algorithm = https
23:16:08 policy-apex-pdp | 	ssl.engine.factory.class = null
23:16:08 policy-apex-pdp | 	ssl.key.password = null
23:16:08 policy-apex-pdp | 	ssl.keymanager.algorithm = SunX509
23:16:08 policy-apex-pdp | 	ssl.keystore.certificate.chain = null
23:16:08 policy-apex-pdp | 	ssl.keystore.key = null
23:16:08 policy-apex-pdp | 	ssl.keystore.location = null
23:16:08 policy-apex-pdp | 	ssl.keystore.password = null
23:16:08 policy-apex-pdp | 	ssl.keystore.type = JKS
23:16:08 policy-apex-pdp | 	ssl.protocol = TLSv1.3
23:16:08 policy-apex-pdp | 	ssl.provider = null
23:16:08 policy-apex-pdp | 	ssl.secure.random.implementation = null
23:16:08 policy-apex-pdp | 	ssl.trustmanager.algorithm = PKIX
23:16:08 policy-apex-pdp | 	ssl.truststore.certificates = null
23:16:08 policy-apex-pdp | 	ssl.truststore.location = null
23:16:08 policy-apex-pdp | 	ssl.truststore.password = null
23:16:08 policy-apex-pdp | 	ssl.truststore.type = JKS
23:16:08 policy-apex-pdp | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:08 policy-apex-pdp |
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.230+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.230+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.230+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715643230
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.230+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Subscribed to topic(s): policy-pdp-pap
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.231+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1c88b35e-32e6-4d11-9d3a-cbdcb6f69eb7, alive=false, publisher=null]]: starting
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.243+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:08 policy-apex-pdp | 	acks = -1
23:16:08 policy-apex-pdp | 	auto.include.jmx.reporter = true
23:16:08 policy-apex-pdp | 	batch.size = 16384
23:16:08 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
23:16:08 policy-apex-pdp | 	buffer.memory = 33554432
23:16:08 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
23:16:08 policy-apex-pdp | 	client.id = producer-1
23:16:08 policy-apex-pdp | 	compression.type = none
23:16:08 policy-apex-pdp | 	connections.max.idle.ms = 540000
23:16:08 policy-apex-pdp | 	delivery.timeout.ms = 120000
23:16:08 policy-apex-pdp | 	enable.idempotence = true
23:16:08 policy-apex-pdp | 	interceptor.classes = []
23:16:08 policy-apex-pdp | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:08 policy-apex-pdp | 	linger.ms = 0
23:16:08 policy-apex-pdp | 	max.block.ms = 60000
23:16:08 policy-apex-pdp | 	max.in.flight.requests.per.connection = 5
23:16:08 policy-apex-pdp | 	max.request.size = 1048576
23:16:08 policy-apex-pdp | 	metadata.max.age.ms = 300000
23:16:08 policy-apex-pdp | 	metadata.max.idle.ms = 300000
23:16:08 policy-apex-pdp | 	metric.reporters = []
23:16:08 policy-apex-pdp | 	metrics.num.samples = 2
23:16:08 policy-apex-pdp | 	metrics.recording.level = INFO
23:16:08 policy-apex-pdp | 	metrics.sample.window.ms = 30000
23:16:08 policy-apex-pdp | 	partitioner.adaptive.partitioning.enable = true
23:16:08 policy-apex-pdp | 	partitioner.availability.timeout.ms = 0
23:16:08 policy-apex-pdp | 	partitioner.class = null
23:16:08 policy-apex-pdp | 	partitioner.ignore.keys = false
23:16:08 policy-apex-pdp | 	receive.buffer.bytes = 32768
23:16:08 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
23:16:08 policy-apex-pdp | 	reconnect.backoff.ms = 50
23:16:08 policy-apex-pdp | 	request.timeout.ms = 30000
23:16:08 policy-apex-pdp | 	retries = 2147483647
23:16:08 policy-apex-pdp | 	retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.client.callback.handler.class = null
23:16:08 policy-apex-pdp | 	sasl.jaas.config = null
23:16:08 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:08 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:08 policy-apex-pdp | 	sasl.kerberos.service.name = null
23:16:08 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:08 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:08 policy-apex-pdp | 	sasl.login.callback.handler.class = null
23:16:08 policy-apex-pdp | 	sasl.login.class = null
23:16:08 policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
23:16:08 policy-apex-pdp | 	sasl.login.read.timeout.ms = null
23:16:08 policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
23:16:08 policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
23:16:08 policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
23:16:08 policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
23:16:08 policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
23:16:08 policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.mechanism = GSSAPI
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
23:16:08 policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
23:16:08 policy-apex-pdp | 	security.protocol = PLAINTEXT
23:16:08 policy-apex-pdp | 	security.providers = null
23:16:08 policy-apex-pdp | 	send.buffer.bytes = 131072
23:16:08 policy-apex-pdp | 	socket.connection.setup.timeout.max.ms = 30000
23:16:08 policy-apex-pdp | 	socket.connection.setup.timeout.ms = 10000
23:16:08 policy-apex-pdp | 	ssl.cipher.suites = null
23:16:08 policy-apex-pdp | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:08 policy-apex-pdp | 	ssl.endpoint.identification.algorithm = https
23:16:08 policy-apex-pdp | 	ssl.engine.factory.class = null
23:16:08 policy-apex-pdp | 	ssl.key.password = null
23:16:08 policy-apex-pdp | 	ssl.keymanager.algorithm = SunX509
23:16:08 policy-apex-pdp | 	ssl.keystore.certificate.chain = null
23:16:08 policy-apex-pdp | 	ssl.keystore.key = null
23:16:08 policy-apex-pdp | 	ssl.keystore.location = null
23:16:08 policy-apex-pdp | 	ssl.keystore.password = null
23:16:08 policy-apex-pdp | 	ssl.keystore.type = JKS
23:16:08 policy-apex-pdp | 	ssl.protocol = TLSv1.3
23:16:08 policy-apex-pdp | 	ssl.provider = null
23:16:08 policy-apex-pdp | 	ssl.secure.random.implementation = null
23:16:08 policy-apex-pdp | 	ssl.trustmanager.algorithm = PKIX
23:16:08 policy-apex-pdp | 	ssl.truststore.certificates = null
23:16:08 policy-apex-pdp | 	ssl.truststore.location = null
23:16:08 policy-apex-pdp | 	ssl.truststore.password = null
23:16:08 policy-apex-pdp | 	ssl.truststore.type = JKS
23:16:08 policy-apex-pdp | 	transaction.timeout.ms = 60000
23:16:08 policy-apex-pdp | 	transactional.id = null
23:16:08 policy-apex-pdp | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:08 policy-apex-pdp |
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.251+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.268+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.268+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.268+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715643268
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.268+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1c88b35e-32e6-4d11-9d3a-cbdcb6f69eb7, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.268+00:00|INFO|ServiceManager|main] service manager starting set alive
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.268+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.270+00:00|INFO|ServiceManager|main] service manager starting topic sinks
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.271+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.272+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.273+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.273+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.273+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d005a2fb-e92a-479f-b1ce-62b4bf1c7829, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.273+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d005a2fb-e92a-479f-b1ce-62b4bf1c7829, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.273+00:00|INFO|ServiceManager|main] service manager starting Create REST server
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.288+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
23:16:08 policy-apex-pdp | []
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.291+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"76dacb24-06ac-46e7-9764-a9431e5dde89","timestampMs":1717715643274,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"}
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.435+00:00|INFO|ServiceManager|main] service manager starting Rest Server
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.436+00:00|INFO|ServiceManager|main] service manager starting
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.436+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.436+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.447+00:00|INFO|ServiceManager|main] service manager started
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.447+00:00|INFO|ServiceManager|main] service manager started
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.448+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.448+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.596+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: g5nVYXh1RN-GfsPixxR24w
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.597+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.600+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Cluster ID: g5nVYXh1RN-GfsPixxR24w
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.601+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.607+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] (Re-)joining group
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Request joining group due to: need to re-join with the given member-id: consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2-6475633e-7420-47d1-831e-2edb4cb10f03
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:08 policy-apex-pdp | [2024-06-06T23:14:03.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] (Re-)joining group
23:16:08 policy-apex-pdp | [2024-06-06T23:14:04.037+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
23:16:08 policy-apex-pdp | [2024-06-06T23:14:04.037+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
23:16:08 policy-apex-pdp | [2024-06-06T23:14:06.625+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Successfully joined group with generation Generation{generationId=1, memberId='consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2-6475633e-7420-47d1-831e-2edb4cb10f03', protocol='range'}
23:16:08 policy-apex-pdp | [2024-06-06T23:14:06.633+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Finished assignment for group at generation 1: {consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2-6475633e-7420-47d1-831e-2edb4cb10f03=Assignment(partitions=[policy-pdp-pap-0])}
23:16:08 policy-apex-pdp | [2024-06-06T23:14:06.640+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Successfully synced group in generation Generation{generationId=1, memberId='consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2-6475633e-7420-47d1-831e-2edb4cb10f03', protocol='range'}
23:16:08 policy-apex-pdp | [2024-06-06T23:14:06.641+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:08 policy-apex-pdp | [2024-06-06T23:14:06.642+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Adding newly assigned partitions: policy-pdp-pap-0
23:16:08 policy-apex-pdp | [2024-06-06T23:14:06.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Found no committed offset for partition policy-pdp-pap-0
23:16:08 policy-apex-pdp | [2024-06-06T23:14:06.657+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d005a2fb-e92a-479f-b1ce-62b4bf1c7829-2, groupId=d005a2fb-e92a-479f-b1ce-62b4bf1c7829] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.272+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"25422dbf-4d29-4540-8d18-0942903ce7f5","timestampMs":1717715663272,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"}
23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.308+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"25422dbf-4d29-4540-8d18-0942903ce7f5","timestampMs":1717715663272,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"}
23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.313+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.460+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:08 policy-apex-pdp | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"df3ff21a-8e84-4d20-acf6-d526ebeec034","timestampMs":1717715663374,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.470+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.470+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp
Heartbeat","messageName":"PDP_STATUS","requestId":"c511d065-9252-4527-8656-96e116e0970b","timestampMs":1717715663470,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.471+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"df3ff21a-8e84-4d20-acf6-d526ebeec034","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"54dc32fc-a685-4696-85c9-4db9e963fd29","timestampMs":1717715663471,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.490+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c511d065-9252-4527-8656-96e116e0970b","timestampMs":1717715663470,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.491+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.491+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"df3ff21a-8e84-4d20-acf6-d526ebeec034","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"54dc32fc-a685-4696-85c9-4db9e963fd29","timestampMs":1717715663471,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.491+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.510+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c8104045-1dc2-4641-bf61-d87c07469005","timestampMs":1717715663375,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.513+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c8104045-1dc2-4641-bf61-d87c07469005","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"6e127b25-1551-4582-bde8-d75fc27d7b4f","timestampMs":1717715663513,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.523+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c8104045-1dc2-4641-bf61-d87c07469005","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"6e127b25-1551-4582-bde8-d75fc27d7b4f","timestampMs":1717715663513,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.523+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.572+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b77f616-647f-44f7-9711-58d6973c2939","timestampMs":1717715663532,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.573+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8b77f616-647f-44f7-9711-58d6973c2939","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b7736e57-6d5b-43a1-a903-5dfa48d6b475","timestampMs":1717715663573,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.596+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8b77f616-647f-44f7-9711-58d6973c2939","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"b7736e57-6d5b-43a1-a903-5dfa48d6b475","timestampMs":1717715663573,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-apex-pdp | [2024-06-06T23:14:23.597+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:08 policy-apex-pdp | [2024-06-06T23:14:56.170+00:00|INFO|RequestLog|qtp739264372-32] 172.17.0.5 - policyadmin [06/Jun/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.52.0" 23:16:08 policy-apex-pdp | [2024-06-06T23:15:56.082+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.5 - policyadmin [06/Jun/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10654 "-" "Prometheus/2.52.0" 23:16:08 =================================== 23:16:08 ======== Logs from api ======== 23:16:08 policy-api | Waiting for mariadb port 3306... 23:16:08 policy-api | mariadb (172.17.0.4:3306) open 23:16:08 policy-api | Waiting for policy-db-migrator port 6824... 23:16:08 policy-api | policy-db-migrator (172.17.0.8:6824) open 23:16:08 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:08 policy-api | 23:16:08 policy-api | . 
____ _ __ _ _ 23:16:08 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:08 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:08 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:08 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:08 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:08 policy-api | :: Spring Boot :: (v3.1.10) 23:16:08 policy-api | 23:16:08 policy-api | [2024-06-06T23:13:39.556+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 23:16:08 policy-api | [2024-06-06T23:13:39.618+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:08 policy-api | [2024-06-06T23:13:39.619+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:08 policy-api | [2024-06-06T23:13:41.465+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:08 policy-api | [2024-06-06T23:13:41.540+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 67 ms. Found 6 JPA repository interfaces. 23:16:08 policy-api | [2024-06-06T23:13:41.942+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:08 policy-api | [2024-06-06T23:13:41.943+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:08 policy-api | [2024-06-06T23:13:42.553+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:08 policy-api | [2024-06-06T23:13:42.565+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:08 policy-api | [2024-06-06T23:13:42.568+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:08 policy-api | [2024-06-06T23:13:42.568+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:16:08 policy-api | [2024-06-06T23:13:42.667+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:08 policy-api | [2024-06-06T23:13:42.667+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2984 ms 23:16:08 policy-api | [2024-06-06T23:13:43.085+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:08 policy-api | [2024-06-06T23:13:43.172+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 23:16:08 policy-api | [2024-06-06T23:13:43.223+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:08 policy-api | [2024-06-06T23:13:43.544+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:08 policy-api | [2024-06-06T23:13:43.574+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:08 policy-api | [2024-06-06T23:13:43.664+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@288ca5f0 23:16:08 policy-api | [2024-06-06T23:13:43.666+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
23:16:08 policy-api | [2024-06-06T23:13:45.607+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:08 policy-api | [2024-06-06T23:13:45.617+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:08 policy-api | [2024-06-06T23:13:46.785+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:08 policy-api | [2024-06-06T23:13:47.552+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:08 policy-api | [2024-06-06T23:13:48.638+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:08 policy-api | [2024-06-06T23:13:48.831+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@962e8c5, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@433e9108, org.springframework.security.web.context.SecurityContextHolderFilter@9091a0f, org.springframework.security.web.header.HeaderWriterFilter@2ff61c95, org.springframework.security.web.authentication.logout.LogoutFilter@7c453a1f, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@dd3586e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2f3181d9, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5c8b15f5, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@70ac3a87, org.springframework.security.web.access.ExceptionTranslationFilter@46270641, 
org.springframework.security.web.access.intercept.AuthorizationFilter@25b2d26a] 23:16:08 policy-api | [2024-06-06T23:13:49.712+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:08 policy-api | [2024-06-06T23:13:49.808+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:08 policy-api | [2024-06-06T23:13:49.836+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:08 policy-api | [2024-06-06T23:13:49.855+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.0 seconds (process running for 11.639) 23:16:08 policy-api | [2024-06-06T23:14:36.832+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:08 policy-api | [2024-06-06T23:14:36.832+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:08 policy-api | [2024-06-06T23:14:36.833+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 23:16:08 policy-api | [2024-06-06T23:14:37.154+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:16:08 policy-api | [] 23:16:08 =================================== 23:16:08 ======== Logs from csit-tests ======== 23:16:08 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot 23:16:08 policy-csit | Run Robot test 23:16:08 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 23:16:08 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 23:16:08 policy-csit | -v POLICY_API_IP:policy-api:6969 23:16:08 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 23:16:08 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 23:16:08 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 23:16:08 policy-csit | -v 
APEX_IP:policy-apex-pdp:6969 23:16:08 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 23:16:08 policy-csit | -v KAFKA_IP:kafka:9092 23:16:08 policy-csit | -v PROMETHEUS_IP:prometheus:9090 23:16:08 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 23:16:08 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 23:16:08 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 23:16:08 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 23:16:08 policy-csit | -v TEMP_FOLDER:/tmp/distribution 23:16:08 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 23:16:08 policy-csit | -v CLAMP_K8S_TEST: 23:16:08 policy-csit | Starting Robot test suites ... 23:16:08 policy-csit | ============================================================================== 23:16:08 policy-csit | Pap-Test & Pap-Slas 23:16:08 policy-csit | ============================================================================== 23:16:08 policy-csit | Pap-Test & Pap-Slas.Pap-Test 23:16:08 policy-csit | ============================================================================== 23:16:08 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | Healthcheck :: Verify policy pap health check | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... 
| PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... 
| PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | 23:16:08 policy-csit | 22 tests, 22 passed, 0 failed 23:16:08 policy-csit | ============================================================================== 23:16:08 policy-csit | Pap-Test & Pap-Slas.Pap-Slas 23:16:08 policy-csit | ============================================================================== 23:16:08 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... 
| PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | 23:16:08 policy-csit | ------------------------------------------------------------------------------ 23:16:08 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 23:16:08 policy-csit | 8 tests, 8 passed, 0 failed 23:16:08 policy-csit | ============================================================================== 23:16:08 policy-csit | Pap-Test & Pap-Slas | PASS | 23:16:08 policy-csit | 30 tests, 30 passed, 0 failed 23:16:08 policy-csit | ============================================================================== 23:16:08 policy-csit | Output: /tmp/results/output.xml 23:16:08 policy-csit | Log: /tmp/results/log.html 23:16:08 policy-csit | Report: /tmp/results/report.html 23:16:08 policy-csit | RESULT: 0 23:16:08 =================================== 23:16:08 ======== Logs from policy-db-migrator ======== 23:16:08 policy-db-migrator | Waiting for mariadb port 3306... 23:16:08 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:08 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:08 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:08 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:08 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:08 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:08 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:08 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 
23:16:08 policy-db-migrator | 321 blocks
23:16:08 policy-db-migrator | Preparing upgrade release version: 0800
23:16:08 policy-db-migrator | Preparing upgrade release version: 0900
23:16:08 policy-db-migrator | Preparing upgrade release version: 1000
23:16:08 policy-db-migrator | Preparing upgrade release version: 1100
23:16:08 policy-db-migrator | Preparing upgrade release version: 1200
23:16:08 policy-db-migrator | Preparing upgrade release version: 1300
23:16:08 policy-db-migrator | Done
23:16:08 policy-db-migrator | name version
23:16:08 policy-db-migrator | policyadmin 0
23:16:08 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
23:16:08 policy-db-migrator | upgrade: 0 -> 1300
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator |
23:16:08 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:08 policy-db-migrator |
-------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 
policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:08 
policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY 
VARCHAR(255) NULL) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:08 policy-db-migrator | -------------- 
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 
policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, 
conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | 
-------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version 
VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:08 
policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, 
version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 
23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype 
(`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY 
KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | 23:16:08 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:08 policy-db-migrator | -------------- 23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, 
relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0820-toscatrigger.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0100-pdp.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
23:16:08 policy-db-migrator | JOIN pdpstatistics b
23:16:08 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
23:16:08 policy-db-migrator | SET a.id = b.id
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0210-sequence.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0220-sequence.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0120-toscatrigger.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0140-toscaparameter.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0150-toscaproperty.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0100-upgrade.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | select 'upgrade to 1100 completed' as msg
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | msg
23:16:08 policy-db-migrator | upgrade to 1100 completed
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0120-audit_sequence.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | TRUNCATE TABLE sequence
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE pdpstatistics
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | DROP TABLE statistics_sequence
23:16:08 policy-db-migrator | --------------
23:16:08 policy-db-migrator | 
23:16:08 policy-db-migrator | policyadmin: OK: upgrade (1300)
23:16:08 policy-db-migrator | name version
23:16:08 policy-db-migrator | policyadmin 1300
23:16:08 policy-db-migrator | ID script operation from_version to_version tag success atTime
23:16:08 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:32
23:16:08 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:33
23:16:08 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:34
23:16:08 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:35
23:16:08 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0606242313320800u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36
23:16:08 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800
0900 0606242313320900u 1 2024-06-06 23:13:36 23:16:08 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36 23:16:08 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36 23:16:08 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36 23:16:08 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:36 23:16:08 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0606242313320900u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0606242313321000u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0606242313321100u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0606242313321200u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0606242313321200u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0606242313321200u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0606242313321200u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0606242313321300u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0606242313321300u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0606242313321300u 1 2024-06-06 23:13:37 23:16:08 policy-db-migrator | policyadmin: OK @ 1300 23:16:08 =================================== 23:16:08 ======== Logs from pap ======== 23:16:08 policy-pap | Waiting for mariadb port 3306... 23:16:08 policy-pap | mariadb (172.17.0.4:3306) open 23:16:08 policy-pap | Waiting for kafka port 9092... 23:16:08 policy-pap | kafka (172.17.0.6:9092) open 23:16:08 policy-pap | Waiting for api port 6969... 23:16:08 policy-pap | api (172.17.0.9:6969) open 23:16:08 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:08 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:08 policy-pap | 23:16:08 policy-pap | . 
____ _ __ _ _ 23:16:08 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:08 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:08 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:08 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:08 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:08 policy-pap | :: Spring Boot :: (v3.1.10) 23:16:08 policy-pap | 23:16:08 policy-pap | [2024-06-06T23:13:52.201+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 23:16:08 policy-pap | [2024-06-06T23:13:52.272+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:08 policy-pap | [2024-06-06T23:13:52.273+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:08 policy-pap | [2024-06-06T23:13:54.262+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:08 policy-pap | [2024-06-06T23:13:54.360+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 88 ms. Found 7 JPA repository interfaces. 23:16:08 policy-pap | [2024-06-06T23:13:54.827+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:08 policy-pap | [2024-06-06T23:13:54.828+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:08 policy-pap | [2024-06-06T23:13:55.450+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:08 policy-pap | [2024-06-06T23:13:55.461+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:08 policy-pap | [2024-06-06T23:13:55.463+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:08 policy-pap | [2024-06-06T23:13:55.463+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:16:08 policy-pap | [2024-06-06T23:13:55.568+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:08 policy-pap | [2024-06-06T23:13:55.569+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3220 ms 23:16:08 policy-pap | [2024-06-06T23:13:56.004+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:08 policy-pap | [2024-06-06T23:13:56.057+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 23:16:08 policy-pap | [2024-06-06T23:13:56.398+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:08 policy-pap | [2024-06-06T23:13:56.501+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@60cf62ad 23:16:08 policy-pap | [2024-06-06T23:13:56.503+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
23:16:08 policy-pap | [2024-06-06T23:13:56.533+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 23:16:08 policy-pap | [2024-06-06T23:13:58.153+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 23:16:08 policy-pap | [2024-06-06T23:13:58.164+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:08 policy-pap | [2024-06-06T23:13:58.773+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:08 policy-pap | [2024-06-06T23:13:59.209+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:08 policy-pap | [2024-06-06T23:13:59.328+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:08 policy-pap | [2024-06-06T23:13:59.602+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:08 policy-pap | allow.auto.create.topics = true 23:16:08 policy-pap | auto.commit.interval.ms = 5000 23:16:08 policy-pap | auto.include.jmx.reporter = true 23:16:08 policy-pap | auto.offset.reset = latest 23:16:08 policy-pap | bootstrap.servers = [kafka:9092] 23:16:08 policy-pap | check.crcs = true 23:16:08 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:08 policy-pap | client.id = consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-1 23:16:08 policy-pap | client.rack = 23:16:08 policy-pap | connections.max.idle.ms = 540000 23:16:08 policy-pap | default.api.timeout.ms = 60000 23:16:08 policy-pap | enable.auto.commit = true 23:16:08 policy-pap | exclude.internal.topics = true 23:16:08 policy-pap | fetch.max.bytes = 52428800 23:16:08 policy-pap | fetch.max.wait.ms = 500 23:16:08 policy-pap | fetch.min.bytes = 1 23:16:08 policy-pap | group.id = c684b99a-d7a2-4465-b216-cb59f20e796f 23:16:08 policy-pap | group.instance.id = null 23:16:08 policy-pap | heartbeat.interval.ms = 3000 23:16:08 policy-pap | interceptor.classes = [] 23:16:08 policy-pap | internal.leave.group.on.close = true 23:16:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:08 policy-pap | isolation.level = read_uncommitted 23:16:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | max.partition.fetch.bytes = 1048576 23:16:08 policy-pap | max.poll.interval.ms = 300000 23:16:08 policy-pap | max.poll.records = 500 23:16:08 policy-pap | metadata.max.age.ms = 300000 23:16:08 policy-pap | metric.reporters = [] 23:16:08 policy-pap | metrics.num.samples = 2 23:16:08 policy-pap | metrics.recording.level = INFO 23:16:08 policy-pap | metrics.sample.window.ms = 30000 23:16:08 
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:08 policy-pap | receive.buffer.bytes = 65536 23:16:08 policy-pap | reconnect.backoff.max.ms = 1000 23:16:08 policy-pap | reconnect.backoff.ms = 50 23:16:08 policy-pap | request.timeout.ms = 30000 23:16:08 policy-pap | retry.backoff.ms = 100 23:16:08 policy-pap | sasl.client.callback.handler.class = null 23:16:08 policy-pap | sasl.jaas.config = null 23:16:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:08 policy-pap | sasl.kerberos.service.name = null 23:16:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:08 policy-pap | sasl.login.callback.handler.class = null 23:16:08 policy-pap | sasl.login.class = null 23:16:08 policy-pap | sasl.login.connect.timeout.ms = null 23:16:08 policy-pap | sasl.login.read.timeout.ms = null 23:16:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:08 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.mechanism = GSSAPI 23:16:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:08 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:08 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:08 policy-pap | 
sasl.oauthbearer.scope.claim.name = scope 23:16:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:08 policy-pap | security.protocol = PLAINTEXT 23:16:08 policy-pap | security.providers = null 23:16:08 policy-pap | send.buffer.bytes = 131072 23:16:08 policy-pap | session.timeout.ms = 45000 23:16:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:08 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:08 policy-pap | ssl.cipher.suites = null 23:16:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:08 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:08 policy-pap | ssl.engine.factory.class = null 23:16:08 policy-pap | ssl.key.password = null 23:16:08 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:08 policy-pap | ssl.keystore.certificate.chain = null 23:16:08 policy-pap | ssl.keystore.key = null 23:16:08 policy-pap | ssl.keystore.location = null 23:16:08 policy-pap | ssl.keystore.password = null 23:16:08 policy-pap | ssl.keystore.type = JKS 23:16:08 policy-pap | ssl.protocol = TLSv1.3 23:16:08 policy-pap | ssl.provider = null 23:16:08 policy-pap | ssl.secure.random.implementation = null 23:16:08 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:08 policy-pap | ssl.truststore.certificates = null 23:16:08 policy-pap | ssl.truststore.location = null 23:16:08 policy-pap | ssl.truststore.password = null 23:16:08 policy-pap | ssl.truststore.type = JKS 23:16:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | 23:16:08 policy-pap | [2024-06-06T23:13:59.770+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:08 policy-pap | [2024-06-06T23:13:59.771+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:08 policy-pap | [2024-06-06T23:13:59.771+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715639769 23:16:08 policy-pap | 
[2024-06-06T23:13:59.773+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-1, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Subscribed to topic(s): policy-pdp-pap 23:16:08 policy-pap | [2024-06-06T23:13:59.774+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:08 policy-pap | allow.auto.create.topics = true 23:16:08 policy-pap | auto.commit.interval.ms = 5000 23:16:08 policy-pap | auto.include.jmx.reporter = true 23:16:08 policy-pap | auto.offset.reset = latest 23:16:08 policy-pap | bootstrap.servers = [kafka:9092] 23:16:08 policy-pap | check.crcs = true 23:16:08 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:08 policy-pap | client.id = consumer-policy-pap-2 23:16:08 policy-pap | client.rack = 23:16:08 policy-pap | connections.max.idle.ms = 540000 23:16:08 policy-pap | default.api.timeout.ms = 60000 23:16:08 policy-pap | enable.auto.commit = true 23:16:08 policy-pap | exclude.internal.topics = true 23:16:08 policy-pap | fetch.max.bytes = 52428800 23:16:08 policy-pap | fetch.max.wait.ms = 500 23:16:08 policy-pap | fetch.min.bytes = 1 23:16:08 policy-pap | group.id = policy-pap 23:16:08 policy-pap | group.instance.id = null 23:16:08 policy-pap | heartbeat.interval.ms = 3000 23:16:08 policy-pap | interceptor.classes = [] 23:16:08 policy-pap | internal.leave.group.on.close = true 23:16:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:08 policy-pap | isolation.level = read_uncommitted 23:16:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | max.partition.fetch.bytes = 1048576 23:16:08 policy-pap | max.poll.interval.ms = 300000 23:16:08 policy-pap | max.poll.records = 500 23:16:08 policy-pap | metadata.max.age.ms = 300000 23:16:08 policy-pap | metric.reporters = [] 23:16:08 policy-pap | metrics.num.samples = 2 23:16:08 policy-pap | metrics.recording.level = INFO 23:16:08 policy-pap | metrics.sample.window.ms = 
30000 23:16:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:08 policy-pap | receive.buffer.bytes = 65536 23:16:08 policy-pap | reconnect.backoff.max.ms = 1000 23:16:08 policy-pap | reconnect.backoff.ms = 50 23:16:08 policy-pap | request.timeout.ms = 30000 23:16:08 policy-pap | retry.backoff.ms = 100 23:16:08 policy-pap | sasl.client.callback.handler.class = null 23:16:08 policy-pap | sasl.jaas.config = null 23:16:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:08 policy-pap | sasl.kerberos.service.name = null 23:16:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:08 policy-pap | sasl.login.callback.handler.class = null 23:16:08 policy-pap | sasl.login.class = null 23:16:08 policy-pap | sasl.login.connect.timeout.ms = null 23:16:08 policy-pap | sasl.login.read.timeout.ms = null 23:16:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:08 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.mechanism = GSSAPI 23:16:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:08 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:08 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:08 
policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:08 policy-pap | security.protocol = PLAINTEXT 23:16:08 policy-pap | security.providers = null 23:16:08 policy-pap | send.buffer.bytes = 131072 23:16:08 policy-pap | session.timeout.ms = 45000 23:16:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:08 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:08 policy-pap | ssl.cipher.suites = null 23:16:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:08 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:08 policy-pap | ssl.engine.factory.class = null 23:16:08 policy-pap | ssl.key.password = null 23:16:08 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:08 policy-pap | ssl.keystore.certificate.chain = null 23:16:08 policy-pap | ssl.keystore.key = null 23:16:08 policy-pap | ssl.keystore.location = null 23:16:08 policy-pap | ssl.keystore.password = null 23:16:08 policy-pap | ssl.keystore.type = JKS 23:16:08 policy-pap | ssl.protocol = TLSv1.3 23:16:08 policy-pap | ssl.provider = null 23:16:08 policy-pap | ssl.secure.random.implementation = null 23:16:08 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:08 policy-pap | ssl.truststore.certificates = null 23:16:08 policy-pap | ssl.truststore.location = null 23:16:08 policy-pap | ssl.truststore.password = null 23:16:08 policy-pap | ssl.truststore.type = JKS 23:16:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | 23:16:08 policy-pap | [2024-06-06T23:13:59.780+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:08 policy-pap | [2024-06-06T23:13:59.781+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:08 policy-pap | [2024-06-06T23:13:59.781+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715639780 23:16:08 policy-pap | 
[2024-06-06T23:13:59.781+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:08 policy-pap | [2024-06-06T23:14:00.086+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:08 policy-pap | [2024-06-06T23:14:00.237+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:08 policy-pap | [2024-06-06T23:14:00.467+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@9825465, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@36cf6377, org.springframework.security.web.context.SecurityContextHolderFilter@64887fbc, org.springframework.security.web.header.HeaderWriterFilter@f4d391c, org.springframework.security.web.authentication.logout.LogoutFilter@24d0c6a4, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@29dfc68f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1f6d7e7c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7836c79, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@2befb16f, org.springframework.security.web.access.ExceptionTranslationFilter@3f2ef402, 
org.springframework.security.web.access.intercept.AuthorizationFilter@1870b9b8] 23:16:08 policy-pap | [2024-06-06T23:14:01.248+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:08 policy-pap | [2024-06-06T23:14:01.351+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:08 policy-pap | [2024-06-06T23:14:01.393+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:08 policy-pap | [2024-06-06T23:14:01.410+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:08 policy-pap | [2024-06-06T23:14:01.410+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:08 policy-pap | [2024-06-06T23:14:01.411+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:08 policy-pap | [2024-06-06T23:14:01.412+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:08 policy-pap | [2024-06-06T23:14:01.412+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:08 policy-pap | [2024-06-06T23:14:01.412+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:08 policy-pap | [2024-06-06T23:14:01.412+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:08 policy-pap | [2024-06-06T23:14:01.415+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c684b99a-d7a2-4465-b216-cb59f20e796f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5cc87de4 23:16:08 policy-pap | [2024-06-06T23:14:01.427+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c684b99a-d7a2-4465-b216-cb59f20e796f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:08 policy-pap | [2024-06-06T23:14:01.428+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:08 policy-pap | allow.auto.create.topics = true 23:16:08 policy-pap | auto.commit.interval.ms = 5000 23:16:08 policy-pap | auto.include.jmx.reporter = true 23:16:08 policy-pap | auto.offset.reset = latest 23:16:08 policy-pap | bootstrap.servers = [kafka:9092] 23:16:08 policy-pap | check.crcs = true 23:16:08 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:08 policy-pap | client.id = consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3 23:16:08 policy-pap | client.rack = 23:16:08 policy-pap | connections.max.idle.ms = 540000 23:16:08 policy-pap | default.api.timeout.ms = 60000 23:16:08 policy-pap | enable.auto.commit = true 23:16:08 policy-pap | exclude.internal.topics = true 23:16:08 policy-pap | fetch.max.bytes = 52428800 23:16:08 policy-pap | fetch.max.wait.ms = 500 23:16:08 policy-pap | fetch.min.bytes = 1 23:16:08 policy-pap | group.id = c684b99a-d7a2-4465-b216-cb59f20e796f 23:16:08 policy-pap | group.instance.id = null 23:16:08 policy-pap | heartbeat.interval.ms = 3000 23:16:08 policy-pap | interceptor.classes = [] 23:16:08 policy-pap | internal.leave.group.on.close = true 23:16:08 policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false 23:16:08 policy-pap | isolation.level = read_uncommitted 23:16:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | max.partition.fetch.bytes = 1048576 23:16:08 policy-pap | max.poll.interval.ms = 300000 23:16:08 policy-pap | max.poll.records = 500 23:16:08 policy-pap | metadata.max.age.ms = 300000 23:16:08 policy-pap | metric.reporters = [] 23:16:08 policy-pap | metrics.num.samples = 2 23:16:08 policy-pap | metrics.recording.level = INFO 23:16:08 policy-pap | metrics.sample.window.ms = 30000 23:16:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:08 policy-pap | receive.buffer.bytes = 65536 23:16:08 policy-pap | reconnect.backoff.max.ms = 1000 23:16:08 policy-pap | reconnect.backoff.ms = 50 23:16:08 policy-pap | request.timeout.ms = 30000 23:16:08 policy-pap | retry.backoff.ms = 100 23:16:08 policy-pap | sasl.client.callback.handler.class = null 23:16:08 policy-pap | sasl.jaas.config = null 23:16:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:08 policy-pap | sasl.kerberos.service.name = null 23:16:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:08 policy-pap | sasl.login.callback.handler.class = null 23:16:08 policy-pap | sasl.login.class = null 23:16:08 policy-pap | sasl.login.connect.timeout.ms = null 23:16:08 policy-pap | sasl.login.read.timeout.ms = null 23:16:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:08 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:08 policy-pap | 
sasl.login.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.mechanism = GSSAPI 23:16:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:08 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:08 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:08 policy-pap | security.protocol = PLAINTEXT 23:16:08 policy-pap | security.providers = null 23:16:08 policy-pap | send.buffer.bytes = 131072 23:16:08 policy-pap | session.timeout.ms = 45000 23:16:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:08 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:08 policy-pap | ssl.cipher.suites = null 23:16:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:08 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:08 policy-pap | ssl.engine.factory.class = null 23:16:08 policy-pap | ssl.key.password = null 23:16:08 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:08 policy-pap | ssl.keystore.certificate.chain = null 23:16:08 policy-pap | ssl.keystore.key = null 23:16:08 policy-pap | ssl.keystore.location = null 23:16:08 policy-pap | ssl.keystore.password = null 23:16:08 policy-pap | ssl.keystore.type = JKS 23:16:08 policy-pap | ssl.protocol = TLSv1.3 23:16:08 policy-pap | ssl.provider = null 23:16:08 policy-pap | ssl.secure.random.implementation = null 23:16:08 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:08 policy-pap | ssl.truststore.certificates = 
null 23:16:08 policy-pap | ssl.truststore.location = null 23:16:08 policy-pap | ssl.truststore.password = null 23:16:08 policy-pap | ssl.truststore.type = JKS 23:16:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | 23:16:08 policy-pap | [2024-06-06T23:14:01.434+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:08 policy-pap | [2024-06-06T23:14:01.434+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:08 policy-pap | [2024-06-06T23:14:01.434+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715641434 23:16:08 policy-pap | [2024-06-06T23:14:01.435+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Subscribed to topic(s): policy-pdp-pap 23:16:08 policy-pap | [2024-06-06T23:14:01.435+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:08 policy-pap | [2024-06-06T23:14:01.435+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=32efee49-262a-4c0a-ab65-8eabcf2dcea0, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@162e29a1 23:16:08 policy-pap | [2024-06-06T23:14:01.435+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=32efee49-262a-4c0a-ab65-8eabcf2dcea0, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:08 policy-pap | [2024-06-06T23:14:01.436+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:08 policy-pap | allow.auto.create.topics = true 23:16:08 policy-pap | auto.commit.interval.ms = 5000 23:16:08 policy-pap | auto.include.jmx.reporter = true 23:16:08 policy-pap | auto.offset.reset = latest 23:16:08 policy-pap | bootstrap.servers = [kafka:9092] 23:16:08 policy-pap | check.crcs = true 23:16:08 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:08 policy-pap | client.id = consumer-policy-pap-4 23:16:08 policy-pap | client.rack = 23:16:08 policy-pap | connections.max.idle.ms = 540000 23:16:08 policy-pap | default.api.timeout.ms = 60000 23:16:08 policy-pap | enable.auto.commit = true 23:16:08 policy-pap | exclude.internal.topics = true 23:16:08 policy-pap | fetch.max.bytes = 52428800 23:16:08 policy-pap | fetch.max.wait.ms = 500 23:16:08 policy-pap | fetch.min.bytes = 1 23:16:08 policy-pap | group.id = policy-pap 23:16:08 policy-pap | group.instance.id = null 23:16:08 policy-pap | heartbeat.interval.ms = 3000 23:16:08 policy-pap | interceptor.classes = [] 23:16:08 policy-pap | internal.leave.group.on.close = true 23:16:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:08 policy-pap | isolation.level = read_uncommitted 23:16:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | max.partition.fetch.bytes = 1048576 23:16:08 policy-pap | max.poll.interval.ms = 300000 23:16:08 policy-pap | max.poll.records = 500 23:16:08 policy-pap | metadata.max.age.ms = 
300000 23:16:08 policy-pap | metric.reporters = [] 23:16:08 policy-pap | metrics.num.samples = 2 23:16:08 policy-pap | metrics.recording.level = INFO 23:16:08 policy-pap | metrics.sample.window.ms = 30000 23:16:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:08 policy-pap | receive.buffer.bytes = 65536 23:16:08 policy-pap | reconnect.backoff.max.ms = 1000 23:16:08 policy-pap | reconnect.backoff.ms = 50 23:16:08 policy-pap | request.timeout.ms = 30000 23:16:08 policy-pap | retry.backoff.ms = 100 23:16:08 policy-pap | sasl.client.callback.handler.class = null 23:16:08 policy-pap | sasl.jaas.config = null 23:16:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:08 policy-pap | sasl.kerberos.service.name = null 23:16:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:08 policy-pap | sasl.login.callback.handler.class = null 23:16:08 policy-pap | sasl.login.class = null 23:16:08 policy-pap | sasl.login.connect.timeout.ms = null 23:16:08 policy-pap | sasl.login.read.timeout.ms = null 23:16:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:08 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.mechanism = GSSAPI 23:16:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:08 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:08 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:08 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:08 policy-pap | security.protocol = PLAINTEXT 23:16:08 policy-pap | security.providers = null 23:16:08 policy-pap | send.buffer.bytes = 131072 23:16:08 policy-pap | session.timeout.ms = 45000 23:16:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:08 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:08 policy-pap | ssl.cipher.suites = null 23:16:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:08 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:08 policy-pap | ssl.engine.factory.class = null 23:16:08 policy-pap | ssl.key.password = null 23:16:08 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:08 policy-pap | ssl.keystore.certificate.chain = null 23:16:08 policy-pap | ssl.keystore.key = null 23:16:08 policy-pap | ssl.keystore.location = null 23:16:08 policy-pap | ssl.keystore.password = null 23:16:08 policy-pap | ssl.keystore.type = JKS 23:16:08 policy-pap | ssl.protocol = TLSv1.3 23:16:08 policy-pap | ssl.provider = null 23:16:08 policy-pap | ssl.secure.random.implementation = null 23:16:08 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:08 policy-pap | ssl.truststore.certificates = null 23:16:08 policy-pap | ssl.truststore.location = null 23:16:08 policy-pap | ssl.truststore.password = null 23:16:08 policy-pap | ssl.truststore.type = JKS 23:16:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:08 policy-pap | 23:16:08 policy-pap | [2024-06-06T23:14:01.441+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:08 policy-pap | 
[2024-06-06T23:14:01.441+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:08 policy-pap | [2024-06-06T23:14:01.441+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715641441 23:16:08 policy-pap | [2024-06-06T23:14:01.441+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:08 policy-pap | [2024-06-06T23:14:01.442+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:08 policy-pap | [2024-06-06T23:14:01.442+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=32efee49-262a-4c0a-ab65-8eabcf2dcea0, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:08 policy-pap | [2024-06-06T23:14:01.442+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c684b99a-d7a2-4465-b216-cb59f20e796f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:08 policy-pap | 
[2024-06-06T23:14:01.442+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=88467773-d7b9-4003-ae6f-d076f7798cf6, alive=false, publisher=null]]: starting 23:16:08 policy-pap | [2024-06-06T23:14:01.459+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:08 policy-pap | acks = -1 23:16:08 policy-pap | auto.include.jmx.reporter = true 23:16:08 policy-pap | batch.size = 16384 23:16:08 policy-pap | bootstrap.servers = [kafka:9092] 23:16:08 policy-pap | buffer.memory = 33554432 23:16:08 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:08 policy-pap | client.id = producer-1 23:16:08 policy-pap | compression.type = none 23:16:08 policy-pap | connections.max.idle.ms = 540000 23:16:08 policy-pap | delivery.timeout.ms = 120000 23:16:08 policy-pap | enable.idempotence = true 23:16:08 policy-pap | interceptor.classes = [] 23:16:08 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:08 policy-pap | linger.ms = 0 23:16:08 policy-pap | max.block.ms = 60000 23:16:08 policy-pap | max.in.flight.requests.per.connection = 5 23:16:08 policy-pap | max.request.size = 1048576 23:16:08 policy-pap | metadata.max.age.ms = 300000 23:16:08 policy-pap | metadata.max.idle.ms = 300000 23:16:08 policy-pap | metric.reporters = [] 23:16:08 policy-pap | metrics.num.samples = 2 23:16:08 policy-pap | metrics.recording.level = INFO 23:16:08 policy-pap | metrics.sample.window.ms = 30000 23:16:08 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:08 policy-pap | partitioner.availability.timeout.ms = 0 23:16:08 policy-pap | partitioner.class = null 23:16:08 policy-pap | partitioner.ignore.keys = false 23:16:08 policy-pap | receive.buffer.bytes = 32768 23:16:08 policy-pap | reconnect.backoff.max.ms = 1000 23:16:08 policy-pap | reconnect.backoff.ms = 50 23:16:08 policy-pap | request.timeout.ms = 30000 23:16:08 policy-pap | retries = 2147483647 
23:16:08 policy-pap | retry.backoff.ms = 100 23:16:08 policy-pap | sasl.client.callback.handler.class = null 23:16:08 policy-pap | sasl.jaas.config = null 23:16:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:08 policy-pap | sasl.kerberos.service.name = null 23:16:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:08 policy-pap | sasl.login.callback.handler.class = null 23:16:08 policy-pap | sasl.login.class = null 23:16:08 policy-pap | sasl.login.connect.timeout.ms = null 23:16:08 policy-pap | sasl.login.read.timeout.ms = null 23:16:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:08 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.mechanism = GSSAPI 23:16:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:08 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:08 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:08 policy-pap | security.protocol = PLAINTEXT 23:16:08 policy-pap | security.providers = null 23:16:08 policy-pap | send.buffer.bytes = 131072 23:16:08 policy-pap | socket.connection.setup.timeout.max.ms = 
30000 23:16:08 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:08 policy-pap | ssl.cipher.suites = null 23:16:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:08 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:08 policy-pap | ssl.engine.factory.class = null 23:16:08 policy-pap | ssl.key.password = null 23:16:08 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:08 policy-pap | ssl.keystore.certificate.chain = null 23:16:08 policy-pap | ssl.keystore.key = null 23:16:08 policy-pap | ssl.keystore.location = null 23:16:08 policy-pap | ssl.keystore.password = null 23:16:08 policy-pap | ssl.keystore.type = JKS 23:16:08 policy-pap | ssl.protocol = TLSv1.3 23:16:08 policy-pap | ssl.provider = null 23:16:08 policy-pap | ssl.secure.random.implementation = null 23:16:08 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:08 policy-pap | ssl.truststore.certificates = null 23:16:08 policy-pap | ssl.truststore.location = null 23:16:08 policy-pap | ssl.truststore.password = null 23:16:08 policy-pap | ssl.truststore.type = JKS 23:16:08 policy-pap | transaction.timeout.ms = 60000 23:16:08 policy-pap | transactional.id = null 23:16:08 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:08 policy-pap | 23:16:08 policy-pap | [2024-06-06T23:14:01.471+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
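The ProducerConfig dump above shows the settings an idempotent producer needs before Kafka will accept it: `acks = -1` (all), a positive `retries` count (here `2147483647`), and `max.in.flight.requests.per.connection` at most 5. A minimal sketch of how those preconditions could be checked against a config dump like this one; `parse_config_dump` and `idempotence_ok` are hypothetical helpers for illustration, not part of policy-pap or the Kafka client.

```python
import re

def parse_config_dump(text: str) -> dict:
    """Extract dotted "key = value" pairs from a Kafka config-dump fragment."""
    pairs = re.findall(r"([a-z][a-z0-9._]*)\s*=\s*(\S+)", text)
    return dict(pairs)

def idempotence_ok(cfg: dict) -> bool:
    """Preconditions for enable.idempotence=true:
    acks must be all (-1), retries > 0, and at most 5 in-flight requests."""
    return (
        cfg.get("acks") in ("-1", "all")
        and int(cfg.get("retries", "0")) > 0
        and int(cfg.get("max.in.flight.requests.per.connection", "6")) <= 5
    )

# Values taken from the producer-1 dump logged above.
dump = """
acks = -1
enable.idempotence = true
max.in.flight.requests.per.connection = 5
retries = 2147483647
"""
print(idempotence_ok(parse_config_dump(dump)))  # True
```

With `delivery.timeout.ms = 120000` also set, the effectively unbounded `retries` value is still capped in time: the producer gives up on a record once the delivery timeout elapses, regardless of retries remaining.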
23:16:08 policy-pap | [2024-06-06T23:14:01.494+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:08 policy-pap | [2024-06-06T23:14:01.494+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:08 policy-pap | [2024-06-06T23:14:01.494+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715641494 23:16:08 policy-pap | [2024-06-06T23:14:01.494+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=88467773-d7b9-4003-ae6f-d076f7798cf6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:08 policy-pap | [2024-06-06T23:14:01.495+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e8141081-41c2-4f31-a8ac-07da5af643ec, alive=false, publisher=null]]: starting 23:16:08 policy-pap | [2024-06-06T23:14:01.495+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:08 policy-pap | acks = -1 23:16:08 policy-pap | auto.include.jmx.reporter = true 23:16:08 policy-pap | batch.size = 16384 23:16:08 policy-pap | bootstrap.servers = [kafka:9092] 23:16:08 policy-pap | buffer.memory = 33554432 23:16:08 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:08 policy-pap | client.id = producer-2 23:16:08 policy-pap | compression.type = none 23:16:08 policy-pap | connections.max.idle.ms = 540000 23:16:08 policy-pap | delivery.timeout.ms = 120000 23:16:08 policy-pap | enable.idempotence = true 23:16:08 policy-pap | interceptor.classes = [] 23:16:08 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:08 policy-pap | linger.ms = 0 23:16:08 policy-pap | max.block.ms = 60000 23:16:08 policy-pap | max.in.flight.requests.per.connection = 5 23:16:08 policy-pap | max.request.size = 1048576 23:16:08 policy-pap | metadata.max.age.ms = 300000 23:16:08 policy-pap | metadata.max.idle.ms = 300000 23:16:08 policy-pap | metric.reporters = [] 
23:16:08 policy-pap | metrics.num.samples = 2 23:16:08 policy-pap | metrics.recording.level = INFO 23:16:08 policy-pap | metrics.sample.window.ms = 30000 23:16:08 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:08 policy-pap | partitioner.availability.timeout.ms = 0 23:16:08 policy-pap | partitioner.class = null 23:16:08 policy-pap | partitioner.ignore.keys = false 23:16:08 policy-pap | receive.buffer.bytes = 32768 23:16:08 policy-pap | reconnect.backoff.max.ms = 1000 23:16:08 policy-pap | reconnect.backoff.ms = 50 23:16:08 policy-pap | request.timeout.ms = 30000 23:16:08 policy-pap | retries = 2147483647 23:16:08 policy-pap | retry.backoff.ms = 100 23:16:08 policy-pap | sasl.client.callback.handler.class = null 23:16:08 policy-pap | sasl.jaas.config = null 23:16:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:08 policy-pap | sasl.kerberos.service.name = null 23:16:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:08 policy-pap | sasl.login.callback.handler.class = null 23:16:08 policy-pap | sasl.login.class = null 23:16:08 policy-pap | sasl.login.connect.timeout.ms = null 23:16:08 policy-pap | sasl.login.read.timeout.ms = null 23:16:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:08 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.mechanism = GSSAPI 23:16:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:08 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:08 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 
3600000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:08 policy-pap | security.protocol = PLAINTEXT 23:16:08 policy-pap | security.providers = null 23:16:08 policy-pap | send.buffer.bytes = 131072 23:16:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:08 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:08 policy-pap | ssl.cipher.suites = null 23:16:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:08 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:08 policy-pap | ssl.engine.factory.class = null 23:16:08 policy-pap | ssl.key.password = null 23:16:08 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:08 policy-pap | ssl.keystore.certificate.chain = null 23:16:08 policy-pap | ssl.keystore.key = null 23:16:08 policy-pap | ssl.keystore.location = null 23:16:08 policy-pap | ssl.keystore.password = null 23:16:08 policy-pap | ssl.keystore.type = JKS 23:16:08 policy-pap | ssl.protocol = TLSv1.3 23:16:08 policy-pap | ssl.provider = null 23:16:08 policy-pap | ssl.secure.random.implementation = null 23:16:08 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:08 policy-pap | ssl.truststore.certificates = null 23:16:08 policy-pap | ssl.truststore.location = null 23:16:08 policy-pap | ssl.truststore.password = null 23:16:08 policy-pap | ssl.truststore.type = JKS 23:16:08 policy-pap | transaction.timeout.ms = 60000 23:16:08 policy-pap | transactional.id = null 23:16:08 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:08 policy-pap | 23:16:08 policy-pap | 
[2024-06-06T23:14:01.496+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 23:16:08 policy-pap | [2024-06-06T23:14:01.498+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:08 policy-pap | [2024-06-06T23:14:01.499+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:08 policy-pap | [2024-06-06T23:14:01.499+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1717715641498 23:16:08 policy-pap | [2024-06-06T23:14:01.499+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e8141081-41c2-4f31-a8ac-07da5af643ec, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:08 policy-pap | [2024-06-06T23:14:01.499+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:08 policy-pap | [2024-06-06T23:14:01.499+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:08 policy-pap | [2024-06-06T23:14:01.501+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:08 policy-pap | [2024-06-06T23:14:01.501+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:08 policy-pap | [2024-06-06T23:14:01.502+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:08 policy-pap | [2024-06-06T23:14:01.504+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:08 policy-pap | [2024-06-06T23:14:01.504+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:08 policy-pap | [2024-06-06T23:14:01.504+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:08 policy-pap | [2024-06-06T23:14:01.505+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:08 policy-pap | [2024-06-06T23:14:01.506+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:08 policy-pap | 
[2024-06-06T23:14:01.512+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:08 policy-pap | [2024-06-06T23:14:01.514+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.031 seconds (process running for 10.604) 23:16:08 policy-pap | [2024-06-06T23:14:01.859+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: g5nVYXh1RN-GfsPixxR24w 23:16:08 policy-pap | [2024-06-06T23:14:01.862+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: g5nVYXh1RN-GfsPixxR24w 23:16:08 policy-pap | [2024-06-06T23:14:01.866+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:08 policy-pap | [2024-06-06T23:14:01.866+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: g5nVYXh1RN-GfsPixxR24w 23:16:08 policy-pap | [2024-06-06T23:14:01.922+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:08 policy-pap | [2024-06-06T23:14:01.922+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Cluster ID: g5nVYXh1RN-GfsPixxR24w 23:16:08 policy-pap | [2024-06-06T23:14:01.991+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:16:08 policy-pap | [2024-06-06T23:14:01.992+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:16:08 policy-pap | 
[2024-06-06T23:14:01.995+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:08 policy-pap | [2024-06-06T23:14:02.055+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:08 policy-pap | [2024-06-06T23:14:02.159+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:08 policy-pap | [2024-06-06T23:14:02.167+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:08 policy-pap | [2024-06-06T23:14:02.990+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:08 policy-pap | [2024-06-06T23:14:02.997+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] (Re-)joining group 23:16:08 policy-pap | [2024-06-06T23:14:03.010+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:08 policy-pap | [2024-06-06T23:14:03.011+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:08 policy-pap | [2024-06-06T23:14:03.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Request joining group due to: need to re-join with the given member-id: consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3-91acf397-7b46-4d0e-b697-59e0aa395f7f 23:16:08 policy-pap | [2024-06-06T23:14:03.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-163a505b-7df5-4d24-8282-0d4b1fe2e9f5 23:16:08 policy-pap | [2024-06-06T23:14:03.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:08 policy-pap | [2024-06-06T23:14:03.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:08 policy-pap | [2024-06-06T23:14:03.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:08 policy-pap | [2024-06-06T23:14:03.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] (Re-)joining group 23:16:08 policy-pap | [2024-06-06T23:14:06.068+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-163a505b-7df5-4d24-8282-0d4b1fe2e9f5', protocol='range'} 23:16:08 policy-pap | [2024-06-06T23:14:06.078+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-163a505b-7df5-4d24-8282-0d4b1fe2e9f5=Assignment(partitions=[policy-pdp-pap-0])} 23:16:08 policy-pap | [2024-06-06T23:14:06.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3-91acf397-7b46-4d0e-b697-59e0aa395f7f', protocol='range'} 23:16:08 policy-pap | [2024-06-06T23:14:06.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Finished assignment for group at generation 1: {consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3-91acf397-7b46-4d0e-b697-59e0aa395f7f=Assignment(partitions=[policy-pdp-pap-0])} 23:16:08 policy-pap | 
[2024-06-06T23:14:06.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3-91acf397-7b46-4d0e-b697-59e0aa395f7f', protocol='range'} 23:16:08 policy-pap | [2024-06-06T23:14:06.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-163a505b-7df5-4d24-8282-0d4b1fe2e9f5', protocol='range'} 23:16:08 policy-pap | [2024-06-06T23:14:06.110+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:08 policy-pap | [2024-06-06T23:14:06.111+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:08 policy-pap | [2024-06-06T23:14:06.113+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Adding newly assigned partitions: policy-pdp-pap-0 23:16:08 policy-pap | [2024-06-06T23:14:06.113+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:08 policy-pap | [2024-06-06T23:14:06.135+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Found no committed offset for 
partition policy-pdp-pap-0 23:16:08 policy-pap | [2024-06-06T23:14:06.135+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:08 policy-pap | [2024-06-06T23:14:06.159+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c684b99a-d7a2-4465-b216-cb59f20e796f-3, groupId=c684b99a-d7a2-4465-b216-cb59f20e796f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:08 policy-pap | [2024-06-06T23:14:06.159+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:16:08 policy-pap | [2024-06-06T23:14:23.309+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 23:16:08 policy-pap | [] 23:16:08 policy-pap | [2024-06-06T23:14:23.309+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"25422dbf-4d29-4540-8d18-0942903ce7f5","timestampMs":1717715663272,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"} 23:16:08 policy-pap | [2024-06-06T23:14:23.317+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"25422dbf-4d29-4540-8d18-0942903ce7f5","timestampMs":1717715663272,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"} 23:16:08 policy-pap | [2024-06-06T23:14:23.319+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:08 policy-pap | [2024-06-06T23:14:23.396+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting 23:16:08 policy-pap | [2024-06-06T23:14:23.396+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting listener 23:16:08 policy-pap | [2024-06-06T23:14:23.396+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting timer 23:16:08 policy-pap | [2024-06-06T23:14:23.397+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=df3ff21a-8e84-4d20-acf6-d526ebeec034, expireMs=1717715693397] 23:16:08 policy-pap | [2024-06-06T23:14:23.398+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] 
apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting enqueue 23:16:08 policy-pap | [2024-06-06T23:14:23.398+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate started 23:16:08 policy-pap | [2024-06-06T23:14:23.398+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=df3ff21a-8e84-4d20-acf6-d526ebeec034, expireMs=1717715693397] 23:16:08 policy-pap | [2024-06-06T23:14:23.405+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"df3ff21a-8e84-4d20-acf6-d526ebeec034","timestampMs":1717715663374,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.456+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"df3ff21a-8e84-4d20-acf6-d526ebeec034","timestampMs":1717715663374,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.457+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:08 policy-pap | [2024-06-06T23:14:23.457+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"df3ff21a-8e84-4d20-acf6-d526ebeec034","timestampMs":1717715663374,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | 
[2024-06-06T23:14:23.458+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:08 policy-pap | [2024-06-06T23:14:23.489+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c511d065-9252-4527-8656-96e116e0970b","timestampMs":1717715663470,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"} 23:16:08 policy-pap | [2024-06-06T23:14:23.490+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:08 policy-pap | [2024-06-06T23:14:23.490+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"df3ff21a-8e84-4d20-acf6-d526ebeec034","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"54dc32fc-a685-4696-85c9-4db9e963fd29","timestampMs":1717715663471,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.491+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c511d065-9252-4527-8656-96e116e0970b","timestampMs":1717715663470,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup"} 23:16:08 policy-pap | [2024-06-06T23:14:23.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping 23:16:08 policy-pap | [2024-06-06T23:14:23.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping enqueue 23:16:08 policy-pap | [2024-06-06T23:14:23.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping timer 23:16:08 policy-pap | [2024-06-06T23:14:23.492+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=df3ff21a-8e84-4d20-acf6-d526ebeec034, expireMs=1717715693397] 23:16:08 policy-pap | [2024-06-06T23:14:23.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping listener 23:16:08 policy-pap | [2024-06-06T23:14:23.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopped 23:16:08 policy-pap | [2024-06-06T23:14:23.499+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate successful 23:16:08 policy-pap | [2024-06-06T23:14:23.499+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 start publishing next request 23:16:08 policy-pap | [2024-06-06T23:14:23.499+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange starting 23:16:08 policy-pap | [2024-06-06T23:14:23.499+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange starting listener 23:16:08 policy-pap | [2024-06-06T23:14:23.499+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange starting timer 23:16:08 policy-pap | [2024-06-06T23:14:23.499+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=c8104045-1dc2-4641-bf61-d87c07469005, expireMs=1717715693499] 23:16:08 policy-pap | [2024-06-06T23:14:23.499+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange starting 
enqueue 23:16:08 policy-pap | [2024-06-06T23:14:23.500+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=c8104045-1dc2-4641-bf61-d87c07469005, expireMs=1717715693499] 23:16:08 policy-pap | [2024-06-06T23:14:23.500+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c8104045-1dc2-4641-bf61-d87c07469005","timestampMs":1717715663375,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange started 23:16:08 policy-pap | [2024-06-06T23:14:23.548+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c8104045-1dc2-4641-bf61-d87c07469005","timestampMs":1717715663375,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.548+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:08 policy-pap | [2024-06-06T23:14:23.552+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c8104045-1dc2-4641-bf61-d87c07469005","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"6e127b25-1551-4582-bde8-d75fc27d7b4f","timestampMs":1717715663513,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.553+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"df3ff21a-8e84-4d20-acf6-d526ebeec034","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"54dc32fc-a685-4696-85c9-4db9e963fd29","timestampMs":1717715663471,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.553+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id df3ff21a-8e84-4d20-acf6-d526ebeec034 23:16:08 policy-pap | [2024-06-06T23:14:23.553+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange stopping 23:16:08 policy-pap | [2024-06-06T23:14:23.553+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange stopping enqueue 23:16:08 policy-pap | [2024-06-06T23:14:23.553+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange stopping timer 23:16:08 policy-pap | [2024-06-06T23:14:23.553+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=c8104045-1dc2-4641-bf61-d87c07469005, expireMs=1717715693499] 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange stopping listener 23:16:08 policy-pap | 
[2024-06-06T23:14:23.554+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange stopped 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpStateChange successful 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 start publishing next request 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting listener 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting timer 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=8b77f616-647f-44f7-9711-58d6973c2939, expireMs=1717715693554] 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate starting enqueue 23:16:08 policy-pap | [2024-06-06T23:14:23.554+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate started 23:16:08 policy-pap | [2024-06-06T23:14:23.558+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b77f616-647f-44f7-9711-58d6973c2939","timestampMs":1717715663532,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 
policy-pap | [2024-06-06T23:14:23.559+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c8104045-1dc2-4641-bf61-d87c07469005","timestampMs":1717715663375,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.559+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:08 policy-pap | [2024-06-06T23:14:23.569+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c8104045-1dc2-4641-bf61-d87c07469005","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"6e127b25-1551-4582-bde8-d75fc27d7b4f","timestampMs":1717715663513,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.571+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c8104045-1dc2-4641-bf61-d87c07469005 23:16:08 policy-pap | [2024-06-06T23:14:23.572+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b77f616-647f-44f7-9711-58d6973c2939","timestampMs":1717715663532,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.573+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event 
of type PDP_UPDATE 23:16:08 policy-pap | [2024-06-06T23:14:23.600+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"source":"pap-572f2fc8-c911-4002-a0f5-f415c6c53859","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b77f616-647f-44f7-9711-58d6973c2939","timestampMs":1717715663532,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.600+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:08 policy-pap | [2024-06-06T23:14:23.602+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8b77f616-647f-44f7-9711-58d6973c2939","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b7736e57-6d5b-43a1-a903-5dfa48d6b475","timestampMs":1717715663573,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.602+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping 23:16:08 policy-pap | [2024-06-06T23:14:23.602+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping enqueue 23:16:08 policy-pap | [2024-06-06T23:14:23.602+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping timer 23:16:08 policy-pap | [2024-06-06T23:14:23.602+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8b77f616-647f-44f7-9711-58d6973c2939, expireMs=1717715693554] 23:16:08 policy-pap | 
[2024-06-06T23:14:23.602+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopping listener 23:16:08 policy-pap | [2024-06-06T23:14:23.603+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate stopped 23:16:08 policy-pap | [2024-06-06T23:14:23.605+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8b77f616-647f-44f7-9711-58d6973c2939","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b7736e57-6d5b-43a1-a903-5dfa48d6b475","timestampMs":1717715663573,"name":"apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:08 policy-pap | [2024-06-06T23:14:23.605+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8b77f616-647f-44f7-9711-58d6973c2939 23:16:08 policy-pap | [2024-06-06T23:14:23.607+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 PdpUpdate successful 23:16:08 policy-pap | [2024-06-06T23:14:23.607+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cb88389f-82a4-42a2-91cf-61076a7fa6d3 has no more requests 23:16:08 policy-pap | [2024-06-06T23:14:38.724+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:08 policy-pap | [2024-06-06T23:14:38.725+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:16:08 policy-pap | [2024-06-06T23:14:38.726+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 23:16:08 policy-pap | [2024-06-06T23:14:53.398+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer 
[name=df3ff21a-8e84-4d20-acf6-d526ebeec034, expireMs=1717715693397] 23:16:08 policy-pap | [2024-06-06T23:14:53.499+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=c8104045-1dc2-4641-bf61-d87c07469005, expireMs=1717715693499] 23:16:08 policy-pap | [2024-06-06T23:14:59.145+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 23:16:08 policy-pap | [2024-06-06T23:14:59.208+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:08 policy-pap | [2024-06-06T23:14:59.219+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:08 policy-pap | [2024-06-06T23:14:59.221+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:08 policy-pap | [2024-06-06T23:14:59.606+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:00.148+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:00.149+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:00.645+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:00.882+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:08 policy-pap | [2024-06-06T23:15:00.988+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:08 policy-pap | [2024-06-06T23:15:00.989+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:00.989+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:01.009+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, 
pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-06-06T23:15:00Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-06-06T23:15:00Z, user=policyadmin)] 23:16:08 policy-pap | [2024-06-06T23:15:01.679+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:01.680+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:08 policy-pap | [2024-06-06T23:15:01.680+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:08 policy-pap | [2024-06-06T23:15:01.680+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:01.681+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:01.690+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-06-06T23:15:01Z, user=policyadmin)] 23:16:08 policy-pap | [2024-06-06T23:15:02.054+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 23:16:08 policy-pap | [2024-06-06T23:15:02.054+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:02.054+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:08 policy-pap | [2024-06-06T23:15:02.054+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:08 policy-pap | [2024-06-06T23:15:02.054+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached 
group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:02.054+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:02.062+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-06-06T23:15:02Z, user=policyadmin)] 23:16:08 policy-pap | [2024-06-06T23:15:02.581+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:08 policy-pap | [2024-06-06T23:15:02.583+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:16:08 policy-pap | [2024-06-06T23:16:01.507+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 23:16:08 =================================== 23:16:08 ======== Logs from prometheus ======== 23:16:08 prometheus | ts=2024-06-06T23:13:23.585Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:08 prometheus | ts=2024-06-06T23:13:23.585Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.52.0, branch=HEAD, revision=879d80922a227c37df502e7315fad8ceb10a986d)" 23:16:08 prometheus | ts=2024-06-06T23:13:23.585Z caller=main.go:622 level=info build_context="(go=go1.22.3, platform=linux/amd64, user=root@1b4f4c206e41, date=20240508-21:56:43, tags=netgo,builtinassets,stringlabels)" 23:16:08 prometheus | ts=2024-06-06T23:13:23.585Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:08 prometheus | ts=2024-06-06T23:13:23.585Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:08 prometheus | ts=2024-06-06T23:13:23.585Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:08 prometheus | 
ts=2024-06-06T23:13:23.587Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:08 prometheus | ts=2024-06-06T23:13:23.587Z caller=main.go:1129 level=info msg="Starting TSDB ..." 23:16:08 prometheus | ts=2024-06-06T23:13:23.594Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:08 prometheus | ts=2024-06-06T23:13:23.594Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:16:08 prometheus | ts=2024-06-06T23:13:23.595Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:08 prometheus | ts=2024-06-06T23:13:23.595Z caller=head.go:703 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.14µs 23:16:08 prometheus | ts=2024-06-06T23:13:23.595Z caller=head.go:711 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:08 prometheus | ts=2024-06-06T23:13:23.597Z caller=head.go:783 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:08 prometheus | ts=2024-06-06T23:13:23.597Z caller=head.go:820 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=29.43µs wal_replay_duration=1.402213ms wbl_replay_duration=590ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=3.14µs total_replay_duration=1.461933ms 23:16:08 prometheus | ts=2024-06-06T23:13:23.601Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 23:16:08 prometheus | ts=2024-06-06T23:13:23.601Z caller=main.go:1153 level=info msg="TSDB started" 23:16:08 prometheus | ts=2024-06-06T23:13:23.601Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:08 prometheus | ts=2024-06-06T23:13:23.603Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.364682ms 
db_storage=2.6µs remote_storage=2.72µs web_handler=1.07µs query_engine=1.89µs scrape=325.733µs scrape_sd=124.291µs notify=53.19µs notify_sd=16.53µs rules=2.7µs tracing=5.34µs
23:16:08 prometheus | ts=2024-06-06T23:13:23.603Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
23:16:08 prometheus | ts=2024-06-06T23:13:23.603Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
23:16:08 ===================================
23:16:08 ======== Logs from simulator ========
23:16:08 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:08 simulator | overriding logback.xml
23:16:08 simulator | 2024-06-06 23:13:27,046 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:08 simulator | 2024-06-06 23:13:27,109 INFO org.onap.policy.models.simulators starting
23:16:08 simulator | 2024-06-06 23:13:27,109 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
23:16:08 simulator | 2024-06-06 23:13:27,304 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
23:16:08 simulator | 2024-06-06 23:13:27,304 INFO org.onap.policy.models.simulators starting A&AI simulator
23:16:08 simulator | 2024-06-06 23:13:27,446 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:08 simulator | 2024-06-06 23:13:27,456 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:27,458 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:27,463 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
23:16:08 simulator | 2024-06-06 23:13:27,516 INFO Session workerName=node0
23:16:08 simulator | 2024-06-06 23:13:28,003 INFO Using GSON for REST calls
23:16:08 simulator | 2024-06-06 23:13:28,116 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
23:16:08 simulator | 2024-06-06 23:13:28,126 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
23:16:08 simulator | 2024-06-06 23:13:28,133 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1575ms
23:16:08 simulator | 2024-06-06 23:13:28,133 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4325 ms.
23:16:08 simulator | 2024-06-06 23:13:28,141 INFO org.onap.policy.models.simulators starting SDNC simulator
23:16:08 simulator | 2024-06-06 23:13:28,143 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:08 simulator | 2024-06-06 23:13:28,144 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:28,144 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:28,144 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
23:16:08 simulator | 2024-06-06 23:13:28,146 INFO Session workerName=node0
23:16:08 simulator | 2024-06-06 23:13:28,274 INFO Using GSON for REST calls
23:16:08 simulator | 2024-06-06 23:13:28,284 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
23:16:08 simulator | 2024-06-06 23:13:28,285 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:08 simulator | 2024-06-06 23:13:28,285 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1727ms
23:16:08 simulator | 2024-06-06 23:13:28,285 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4859 ms.
23:16:08 simulator | 2024-06-06 23:13:28,286 INFO org.onap.policy.models.simulators starting SO simulator
23:16:08 simulator | 2024-06-06 23:13:28,290 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:08 simulator | 2024-06-06 23:13:28,290 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:28,292 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:28,292 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
23:16:08 simulator | 2024-06-06 23:13:28,295 INFO Session workerName=node0
23:16:08 simulator | 2024-06-06 23:13:28,381 INFO Using GSON for REST calls
23:16:08 simulator | 2024-06-06 23:13:28,394 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
23:16:08 simulator | 2024-06-06 23:13:28,397 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
23:16:08 simulator | 2024-06-06 23:13:28,397 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1839ms
23:16:08 simulator | 2024-06-06 23:13:28,397 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4895 ms.
23:16:08 simulator | 2024-06-06 23:13:28,398 INFO org.onap.policy.models.simulators starting VFC simulator
23:16:08 simulator | 2024-06-06 23:13:28,400 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:08 simulator | 2024-06-06 23:13:28,401 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:28,402 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:08 simulator | 2024-06-06 23:13:28,403 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
23:16:08 simulator | 2024-06-06 23:13:28,406 INFO Session workerName=node0
23:16:08 simulator | 2024-06-06 23:13:28,474 INFO Using GSON for REST calls
23:16:08 simulator | 2024-06-06 23:13:28,481 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
23:16:08 simulator | 2024-06-06 23:13:28,483 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
23:16:08 simulator | 2024-06-06 23:13:28,483 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1925ms
23:16:08 simulator | 2024-06-06 23:13:28,483 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4919 ms.
23:16:08 simulator | 2024-06-06 23:13:28,484 INFO org.onap.policy.models.simulators started
23:16:08 ===================================
23:16:08 ======== Logs from zookeeper ========
23:16:08 zookeeper | ===> User
23:16:08 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:08 zookeeper | ===> Configuring ...
23:16:08 zookeeper | ===> Running preflight checks ...
23:16:08 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
23:16:08 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
23:16:08 zookeeper | ===> Launching ...
23:16:08 zookeeper | ===> Launching zookeeper ...
23:16:08 zookeeper | [2024-06-06 23:13:26,175] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,181] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,181] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,181] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,181] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,183] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:08 zookeeper | [2024-06-06 23:13:26,183] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:08 zookeeper | [2024-06-06 23:13:26,183] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:08 zookeeper | [2024-06-06 23:13:26,183] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
23:16:08 zookeeper | [2024-06-06 23:13:26,184] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
23:16:08 zookeeper | [2024-06-06 23:13:26,184] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,185] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,185] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,185] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,185] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:08 zookeeper | [2024-06-06 23:13:26,185] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
23:16:08 zookeeper | [2024-06-06 23:13:26,195] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics)
23:16:08 zookeeper | [2024-06-06 23:13:26,198] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:08 zookeeper | [2024-06-06 23:13:26,198] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:08 zookeeper | [2024-06-06 23:13:26,200] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,209] INFO (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,210] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,210] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,210] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,210] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,210] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,210] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,211] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,214] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
23:16:08 zookeeper | [2024-06-06 23:13:26,215] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,215] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,216] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:08 zookeeper | [2024-06-06 23:13:26,216] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:08 zookeeper | [2024-06-06 23:13:26,217] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:08 zookeeper | [2024-06-06 23:13:26,217] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:08 zookeeper | [2024-06-06 23:13:26,217] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:08 zookeeper | [2024-06-06 23:13:26,217] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:08 zookeeper | [2024-06-06 23:13:26,217] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:08 zookeeper | [2024-06-06 23:13:26,217] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:08 zookeeper | [2024-06-06 23:13:26,220] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,221] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,221] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
23:16:08 zookeeper | [2024-06-06 23:13:26,221] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
23:16:08 zookeeper | [2024-06-06 23:13:26,222] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,242] INFO Logging initialized @558ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
23:16:08 zookeeper | [2024-06-06 23:13:26,325] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
23:16:08 zookeeper | [2024-06-06 23:13:26,326] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
23:16:08 zookeeper | [2024-06-06 23:13:26,342] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server)
23:16:08 zookeeper | [2024-06-06 23:13:26,371] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
23:16:08 zookeeper | [2024-06-06 23:13:26,371] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
23:16:08 zookeeper | [2024-06-06 23:13:26,373] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
23:16:08 zookeeper | [2024-06-06 23:13:26,376] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
23:16:08 zookeeper | [2024-06-06 23:13:26,383] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
23:16:08 zookeeper | [2024-06-06 23:13:26,399] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
23:16:08 zookeeper | [2024-06-06 23:13:26,400] INFO Started @716ms (org.eclipse.jetty.server.Server)
23:16:08 zookeeper | [2024-06-06 23:13:26,400] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,407] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:08 zookeeper | [2024-06-06 23:13:26,408] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:08 zookeeper | [2024-06-06 23:13:26,409] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:08 zookeeper | [2024-06-06 23:13:26,410] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:08 zookeeper | [2024-06-06 23:13:26,429] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:08 zookeeper | [2024-06-06 23:13:26,429] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:08 zookeeper | [2024-06-06 23:13:26,430] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
23:16:08 zookeeper | [2024-06-06 23:13:26,430] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
23:16:08 zookeeper | [2024-06-06 23:13:26,434] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
23:16:08 zookeeper | [2024-06-06 23:13:26,435] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:08 zookeeper | [2024-06-06 23:13:26,437] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
23:16:08 zookeeper | [2024-06-06 23:13:26,438] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:08 zookeeper | [2024-06-06 23:13:26,438] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:08 zookeeper | [2024-06-06 23:13:26,446] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
23:16:08 zookeeper | [2024-06-06 23:13:26,448] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
23:16:08 zookeeper | [2024-06-06 23:13:26,457] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
23:16:08 zookeeper | [2024-06-06 23:13:26,458] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
23:16:08 zookeeper | [2024-06-06 23:13:27,360] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
23:16:08 ===================================
23:16:08 Tearing down containers...
23:16:08 Container policy-csit Stopping
23:16:08 Container grafana Stopping
23:16:08 Container policy-apex-pdp Stopping
23:16:08 Container policy-csit Stopped
23:16:08 Container policy-csit Removing
23:16:08 Container policy-csit Removed
23:16:09 Container grafana Stopped
23:16:09 Container grafana Removing
23:16:09 Container grafana Removed
23:16:09 Container prometheus Stopping
23:16:09 Container prometheus Stopped
23:16:09 Container prometheus Removing
23:16:09 Container prometheus Removed
23:16:19 Container policy-apex-pdp Stopped
23:16:19 Container policy-apex-pdp Removing
23:16:19 Container policy-apex-pdp Removed
23:16:19 Container policy-pap Stopping
23:16:19 Container simulator Stopping
23:16:29 Container simulator Stopped
23:16:29 Container simulator Removing
23:16:29 Container simulator Removed
23:16:29 Container policy-pap Stopped
23:16:29 Container policy-pap Removing
23:16:29 Container policy-pap Removed
23:16:29 Container policy-api Stopping
23:16:29 Container kafka Stopping
23:16:30 Container kafka Stopped
23:16:30 Container kafka Removing
23:16:30 Container kafka Removed
23:16:30 Container zookeeper Stopping
23:16:31 Container zookeeper Stopped
23:16:31 Container zookeeper Removing
23:16:31 Container zookeeper Removed
23:16:39 Container policy-api Stopped
23:16:39 Container policy-api Removing
23:16:39 Container policy-api Removed
23:16:39 Container policy-db-migrator Stopping
23:16:39 Container policy-db-migrator Stopped
23:16:39 Container policy-db-migrator Removing
23:16:40 Container policy-db-migrator Removed
23:16:40 Container mariadb Stopping
23:16:40 Container mariadb Stopped
23:16:40 Container mariadb Removing
23:16:40 Container mariadb Removed
23:16:40 Network compose_default Removing
23:16:40 Network compose_default Removed
23:16:40 $ ssh-agent -k
23:16:40 unset SSH_AUTH_SOCK;
23:16:40 unset SSH_AGENT_PID;
23:16:40 echo Agent pid 2055 killed;
23:16:40 [ssh-agent] Stopped.
23:16:40 Robot results publisher started...
23:16:40 INFO: Checking test criticality is deprecated and will be dropped in a future release!
23:16:40 -Parsing output xml:
23:16:41 Done!
23:16:41 -Copying log files to build dir:
23:16:41 Done!
23:16:41 -Assigning results to build:
23:16:41 Done!
23:16:41 -Checking thresholds:
23:16:41 Done!
23:16:41 Done publishing Robot results.
23:16:41 [PostBuildScript] - [INFO] Executing post build scripts.
23:16:41 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6288311433428961269.sh
23:16:41 ---> sysstat.sh
23:16:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9993629827909645314.sh
23:16:42 ---> package-listing.sh
23:16:42 ++ facter osfamily
23:16:42 ++ tr '[:upper:]' '[:lower:]'
23:16:42 + OS_FAMILY=debian
23:16:42 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:16:42 + START_PACKAGES=/tmp/packages_start.txt
23:16:42 + END_PACKAGES=/tmp/packages_end.txt
23:16:42 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:16:42 + PACKAGES=/tmp/packages_start.txt
23:16:42 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:42 + PACKAGES=/tmp/packages_end.txt
23:16:42 + case "${OS_FAMILY}" in
23:16:42 + dpkg -l
23:16:42 + grep '^ii'
23:16:42 + '[' -f /tmp/packages_start.txt ']'
23:16:42 + '[' -f /tmp/packages_end.txt ']'
23:16:42 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:16:42 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:42 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:16:42 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:16:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2352415828474508659.sh
23:16:42 ---> capture-instance-metadata.sh
23:16:42 Setup pyenv:
23:16:42 system
23:16:42 3.8.13
23:16:42 3.9.13
23:16:42 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:16:42 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5Ypn from file:/tmp/.os_lf_venv
23:16:43 lf-activate-venv(): INFO: Installing: lftools
23:16:52 lf-activate-venv(): INFO: Adding /tmp/venv-5Ypn/bin to PATH
23:16:52 INFO: Running in OpenStack, capturing instance metadata
23:16:53 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14230604748212341256.sh
23:16:53 provisioning config files...
23:16:53 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config4850900310910622657tmp
23:16:53 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:16:53 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:16:53 [EnvInject] - Injecting environment variables from a build step.
23:16:53 [EnvInject] - Injecting as environment variables the properties content
23:16:53 SERVER_ID=logs
23:16:53
23:16:53 [EnvInject] - Variables injected successfully.
23:16:53 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12325825505276820460.sh
23:16:53 ---> create-netrc.sh
23:16:53 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4259473980177176767.sh
23:16:53 ---> python-tools-install.sh
23:16:53 Setup pyenv:
23:16:53 system
23:16:53 3.8.13
23:16:53 3.9.13
23:16:53 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:16:53 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5Ypn from file:/tmp/.os_lf_venv
23:16:55 lf-activate-venv(): INFO: Installing: lftools
23:17:02 lf-activate-venv(): INFO: Adding /tmp/venv-5Ypn/bin to PATH
23:17:02 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17998735114956505371.sh
23:17:02 ---> sudo-logs.sh
23:17:02 Archiving 'sudo' log..
23:17:03 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12121310215263857243.sh
23:17:03 ---> job-cost.sh
23:17:03 Setup pyenv:
23:17:03 system
23:17:03 3.8.13
23:17:03 3.9.13
23:17:03 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:03 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5Ypn from file:/tmp/.os_lf_venv
23:17:04 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:09 lf-activate-venv(): INFO: Adding /tmp/venv-5Ypn/bin to PATH
23:17:09 INFO: No Stack...
23:17:09 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:09 INFO: Archiving Costs
23:17:09 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins675267859620674975.sh
23:17:09 ---> logs-deploy.sh
23:17:09 Setup pyenv:
23:17:09 system
23:17:09 3.8.13
23:17:09 3.9.13
23:17:09 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:10 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5Ypn from file:/tmp/.os_lf_venv
23:17:11 lf-activate-venv(): INFO: Installing: lftools
23:17:19 lf-activate-venv(): INFO: Adding /tmp/venv-5Ypn/bin to PATH
23:17:19 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1725
23:17:19 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:17:20 Archives upload complete.
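The package-listing.sh trace earlier snapshots the installed dpkg packages before and after the build (`dpkg -l | grep '^ii'` into packages_start.txt and packages_end.txt), diffs them, and copies the results into archives/. A minimal sketch of that diff idea in Python — the function and the sample package lists are illustrative, not part of the LF-maintained script:

```python
# Sketch of the package-listing idea: compare two snapshots of the
# installed-package list and report what was added or removed.
# diff_packages and the sample lists below are hypothetical.

def diff_packages(start_lines, end_lines):
    """Return (added, removed) between a start and an end snapshot."""
    start, end = set(start_lines), set(end_lines)
    return sorted(end - start), sorted(start - end)

# Made-up snapshots standing in for packages_start.txt / packages_end.txt:
before = ["bash 5.0", "curl 7.68"]
after = ["bash 5.0", "curl 7.68", "jq 1.6"]
added, removed = diff_packages(before, after)
```

The real script archives the raw diff output instead of structured sets, but the intent is the same: make any package drift during the build visible in the job artifacts.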
23:17:20 INFO: archiving logs to Nexus
23:17:21 ---> uname -a:
23:17:21 Linux prd-ubuntu1804-docker-8c-8g-15168 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:17:21
23:17:21 ---> lscpu:
23:17:21 Architecture: x86_64
23:17:21 CPU op-mode(s): 32-bit, 64-bit
23:17:21 Byte Order: Little Endian
23:17:21 CPU(s): 8
23:17:21 On-line CPU(s) list: 0-7
23:17:21 Thread(s) per core: 1
23:17:21 Core(s) per socket: 1
23:17:21 Socket(s): 8
23:17:21 NUMA node(s): 1
23:17:21 Vendor ID: AuthenticAMD
23:17:21 CPU family: 23
23:17:21 Model: 49
23:17:21 Model name: AMD EPYC-Rome Processor
23:17:21 Stepping: 0
23:17:21 CPU MHz: 2800.000
23:17:21 BogoMIPS: 5600.00
23:17:21 Virtualization: AMD-V
23:17:21 Hypervisor vendor: KVM
23:17:21 Virtualization type: full
23:17:21 L1d cache: 32K
23:17:21 L1i cache: 32K
23:17:21 L2 cache: 512K
23:17:21 L3 cache: 16384K
23:17:21 NUMA node0 CPU(s): 0-7
23:17:21 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:17:21
23:17:21 ---> nproc:
23:17:21 8
23:17:21
23:17:22 ---> df -h:
23:17:22 Filesystem Size Used Avail Use% Mounted on
23:17:22 udev 16G 0 16G 0% /dev
23:17:22 tmpfs 3.2G 708K 3.2G 1% /run
23:17:22 /dev/vda1 155G 14G 141G 9% /
23:17:22 tmpfs 16G 0 16G 0% /dev/shm
23:17:22 tmpfs 5.0M 0 5.0M 0% /run/lock
23:17:22 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:17:22 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:17:22 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:17:22
23:17:22 ---> free -m:
23:17:22 total used free shared buff/cache available
23:17:22 Mem: 32167 878 24744 0 6544 30833
23:17:22 Swap: 1023 0 1023
23:17:22
23:17:22 ---> ip addr:
23:17:22 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:17:22 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:17:22 inet 127.0.0.1/8 scope host lo
23:17:22 valid_lft forever preferred_lft forever
23:17:22 inet6 ::1/128 scope host
23:17:22 valid_lft forever preferred_lft forever
23:17:22 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:17:22 link/ether fa:16:3e:ac:a9:da brd ff:ff:ff:ff:ff:ff
23:17:22 inet 10.30.107.28/23 brd 10.30.107.255 scope global dynamic ens3
23:17:22 valid_lft 85984sec preferred_lft 85984sec
23:17:22 inet6 fe80::f816:3eff:feac:a9da/64 scope link
23:17:22 valid_lft forever preferred_lft forever
23:17:22 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:17:22 link/ether 02:42:3a:a3:f6:c4 brd ff:ff:ff:ff:ff:ff
23:17:22 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:17:22 valid_lft forever preferred_lft forever
23:17:22 inet6 fe80::42:3aff:fea3:f6c4/64 scope link
23:17:22 valid_lft forever preferred_lft forever
23:17:22
23:17:22 ---> sar -b -r -n DEV:
23:17:22 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-15168) 06/06/24 _x86_64_ (8 CPU)
23:17:22
23:17:22 23:10:29 LINUX RESTART (8 CPU)
23:17:22
23:17:22 23:11:01 tps rtps wtps bread/s bwrtn/s
23:17:22 23:12:01 292.47 35.33 257.14 1676.52 73782.77
23:17:22 23:13:01 265.41 18.78 246.63 2280.44 148267.24
23:17:22 23:14:01 379.74 11.86 367.87 788.74 69683.75
23:17:22 23:15:01 130.41 0.35 130.06 32.93 17484.09
23:17:22 23:16:01 5.52 0.02 5.50 3.60 116.88
23:17:22 23:17:01 70.15 1.30 68.86 99.45 2143.41
23:17:22 Average: 190.62 11.27 179.35 813.65 51915.70
23:17:22
23:17:22 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:17:22 23:12:01 30105624 31680104 2833596 8.60 69880 1815164 1421988 4.18 890380 1650948 154484
23:17:22 23:13:01 25622256 31621756 7316964 22.21 126064 6016424 1935560 5.69 1031592 5770644 1006500
23:17:22 23:14:01 23783968 29839044 9155252 27.79 142784 6035620 8456008 24.88 3012056 5554200 456
23:17:22 23:15:01 23156524 29535004 9782696 29.70 172516 6289104 9117648 26.83 3392280 5751044 1336
23:17:22 23:16:01 23201996 29581564 9737224 29.56 172756 6289832 9084496 26.73 3346564 5751772 216
23:17:22 23:17:01 25308600 31541556 7630620 23.17 174412 6160716 1705476 5.02 1436144 5622472 236
23:17:22 Average: 25196495 30633171 7742725 23.51 143069 5434477 5286863 15.56 2184836 5016847 193871
23:17:22
23:17:22 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:17:22 23:12:01 ens3 58.92 42.88 825.87 19.56 0.00 0.00 0.00 0.00
23:17:22 23:12:01 lo 1.53 1.53 0.18 0.18 0.00 0.00 0.00 0.00
23:17:22 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:22 23:13:01 ens3 1207.45 608.63 33091.53 52.38 0.00 0.00 0.00 0.00
23:17:22 23:13:01 lo 13.66 13.66 1.31 1.31 0.00 0.00 0.00 0.00
23:17:22 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:22 23:14:01 ens3 9.85 8.17 2.60 2.31 0.00 0.00 0.00 0.00
23:17:22 23:14:01 veth45a93bb 1.37 2.12 0.15 0.20 0.00 0.00 0.00 0.00
23:17:22 23:14:01 veth4e46181 2.82 2.72 0.29 0.28 0.00 0.00 0.00 0.00
23:17:22 23:14:01 vethb842eca 52.09 63.86 19.14 15.24 0.00 0.00 0.00 0.00
23:17:22 23:15:01 ens3 55.12 40.68 1183.84 5.22 0.00 0.00 0.00 0.00
23:17:22 23:15:01 veth45a93bb 3.47 4.25 0.62 0.70 0.00 0.00 0.00 0.00
23:17:22 23:15:01 veth4e46181 30.26 28.88 9.14 20.62 0.00 0.00 0.00 0.00
23:17:22 23:15:01 vethb842eca 33.23 41.11 41.60 11.85 0.00 0.00 0.00 0.00
23:17:22 23:16:01 ens3 1.27 1.20 0.30 0.43 0.00 0.00 0.00 0.00
23:17:22 23:16:01 veth45a93bb 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00
23:17:22 23:16:01 veth4e46181 24.96 24.76 7.59 19.32 0.00 0.00 0.00 0.00
23:17:22 23:16:01 vethb842eca 14.61 17.90 18.52 5.79 0.00 0.00 0.00 0.00
23:17:22 23:17:01 ens3 51.81 41.88 76.14 28.55 0.00 0.00 0.00 0.00
23:17:22 23:17:01 lo 26.33 26.33 2.43 2.43 0.00 0.00 0.00 0.00
23:17:22 23:17:01 docker0 14.96 19.73 2.20 289.06 0.00 0.00 0.00 0.00
23:17:22 Average: ens3 230.76 123.92 5864.13 18.08 0.00 0.00 0.00 0.00
23:17:22 Average: lo 3.73 3.73 0.35 0.35 0.00 0.00 0.00 0.00
23:17:22 Average: docker0 2.49 3.29 0.37 48.17 0.00 0.00 0.00 0.00
23:17:22
23:17:22 ---> sar -P ALL:
23:17:22 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-15168) 06/06/24 _x86_64_ (8 CPU)
23:17:22
23:17:22 23:10:29 LINUX RESTART (8 CPU)
23:17:22
23:17:22 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:17:22 23:12:01 all 9.55 0.00 0.88 4.25 0.04 85.28
23:17:22 23:12:01 0 12.40 0.00 0.88 1.60 0.03 85.08
23:17:22 23:12:01 1 7.84 0.00 1.57 3.77 0.02 86.80
23:17:22 23:12:01 2 4.92 0.00 0.45 0.05 0.03 94.54
23:17:22 23:12:01 3 1.33 0.00 0.47 0.58 0.02 97.60
23:17:22 23:12:01 4 1.49 0.00 0.32 0.12 0.02 98.06
23:17:22 23:12:01 5 0.22 0.00 0.40 14.05 0.08 85.25
23:17:22 23:12:01 6 37.85 0.00 2.14 2.71 0.07 57.23
23:17:22 23:12:01 7 10.38 0.00 0.83 11.24 0.03 77.51
23:17:22 23:13:01 all 12.68 0.00 5.20 17.57 0.07 64.48
23:17:22 23:13:01 0 21.51 0.00 5.28 6.46 0.07 66.69
23:17:22 23:13:01 1 11.77 0.00 5.67 21.18 0.07 61.32
23:17:22 23:13:01 2 11.13 0.00 4.63 8.98 0.03 75.23
23:17:22 23:13:01 3 11.71 0.00 5.09 14.55 0.03 68.61
23:17:22 23:13:01 4 12.52 0.00 5.73 31.85 0.13 49.76
23:17:22 23:13:01 5 11.20 0.00 5.74 16.99 0.10 65.97
23:17:22 23:13:01 6 10.51 0.00 4.10 17.45 0.05 67.88
23:17:22 23:13:01 7 11.08 0.00 5.34 23.08 0.07 60.43
23:17:22 23:14:01 all 23.48 0.00 3.56 7.24 0.08 65.65
23:17:22 23:14:01 0 24.23 0.00 3.43 5.90 0.07 66.38
23:17:22 23:14:01 1 17.17 0.00 3.93 25.32 0.07 53.51
23:17:22 23:14:01 2 23.97 0.00 3.57 4.27 0.08 68.11
23:17:22 23:14:01 3 34.60 0.00 4.25 2.13 0.07 58.95
23:17:22 23:14:01 4 25.63 0.00 3.52 4.61 0.07 66.18
23:17:22 23:14:01 5 18.06 0.00 3.56 10.65 0.08 67.65
23:17:22 23:14:01 6 27.31 0.00 3.51 0.49 0.07 68.63
23:17:22 23:14:01 7 16.93 0.00 2.68 4.59 0.07 75.74
23:17:22 23:15:01 all 12.29 0.00 2.16 1.30 0.06 84.19
23:17:22 23:15:01 0 10.55 0.00 1.89 0.07 0.07 87.43
23:17:22 23:15:01 1 13.29 0.00 2.09 0.79 0.10 83.74
23:17:22 23:15:01 2 11.78 0.00 2.21 4.03 0.07 81.91
23:17:22 23:15:01 3 12.96 0.00 2.50 2.94 0.07 81.53
23:17:22 23:15:01 4 11.26 0.00 2.03 0.39 0.05 86.28
23:17:22 23:15:01 5 12.35 0.00 2.74 0.08 0.07 84.76
23:17:22 23:15:01 6 10.78 0.00 2.12 0.18 0.07 86.84
23:17:22 23:15:01 7 15.35 0.00 1.71 1.89 0.07 80.98
23:17:22 23:16:01 all 1.67 0.00 0.20 0.03 0.05 98.04
23:17:22 23:16:01 0 1.32 0.00 0.18 0.00 0.05 98.45
23:17:22 23:16:01 1 1.60 0.00 0.28 0.00 0.03 98.08
23:17:22 23:16:01 2 2.65 0.00 0.23 0.00 0.05 97.06
23:17:22 23:16:01 3 2.21 0.00 0.28 0.07 0.07 97.37
23:17:22 23:16:01 4 1.78 0.00 0.28 0.02 0.05 97.86
23:17:22 23:16:01 5 1.17 0.00 0.13 0.02 0.05 98.63
23:17:22 23:16:01 6 1.62 0.00 0.10 0.02 0.03 98.23
23:17:22 23:16:01 7 1.02 0.00 0.17 0.12 0.03 98.67
23:17:22 23:17:01 all 5.37 0.00 0.73 0.33 0.05 93.52
23:17:22 23:17:01 0 1.90 0.00 0.77 0.18 0.03 97.11
23:17:22 23:17:01 1 2.33 0.00 0.70 0.25 0.03 96.68
23:17:22 23:17:01 2 9.36 0.00 0.68 0.23 0.07 89.66
23:17:22 23:17:01 3 20.12 0.00 0.92 0.30 0.05 78.61
23:17:22 23:17:01 4 1.67 0.00 0.75 0.15 0.07 97.36
23:17:22 23:17:01 5 4.24 0.00 0.68 0.13 0.05 94.89
23:17:22 23:17:01 6 1.59 0.00 0.57 0.08 0.02 97.75
23:17:22 23:17:01 7 1.71 0.00 0.75 1.34 0.03 96.17
23:17:22 Average: all 10.82 0.00 2.12 5.10 0.06 81.90
23:17:22 Average: 0 11.96 0.00 2.06 2.36 0.05 83.57
23:17:22 Average: 1 8.98 0.00 2.37 8.52 0.05 80.08
23:17:22 Average: 2 10.61 0.00 1.95 2.91 0.06 84.47
23:17:22 Average: 3 13.81 0.00 2.24 3.41 0.05 80.49
23:17:22 Average: 4 9.04 0.00 2.10 6.15 0.06 82.64
23:17:22 Average: 5 7.85 0.00 2.20 6.96 0.07 82.91
23:17:22 Average: 6 14.93 0.00 2.09 3.47 0.05 79.46
23:17:22 Average: 7 9.40 0.00 1.91 7.02 0.05 81.62
23:17:22
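The Average: rows in the sar -P ALL output above summarize the six one-minute samples. A minimal sketch of that averaging, using the %idle values printed for the "all" CPU rows — note sar derives its averages from raw kernel counters over the whole window, so a simple mean of the rounded per-interval values (81.86 here) can differ slightly from sar's own reported 81.90:

```python
# Sketch: a simple mean over the %idle samples printed in the
# "sar -P ALL" output above (the six per-interval "all" rows).
# This approximates, but does not exactly reproduce, sar's Average:
# row, which is computed from raw counters rather than rounded rows.

idle_samples = [85.28, 64.48, 65.65, 84.19, 98.04, 93.52]

def mean(values):
    return sum(values) / len(values)

avg_idle = round(mean(idle_samples), 2)  # simple mean of the printed rows
```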