17:00:00 Started by timer
17:00:00 Running as SYSTEM
17:00:00 [EnvInject] - Loading node environment variables.
17:00:00 Building remotely on prd-ubuntu1804-docker-8c-8g-43530 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
17:00:00 [ssh-agent] Looking for ssh-agent implementation...
17:00:00 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
17:00:00 $ ssh-agent
17:00:00 SSH_AUTH_SOCK=/tmp/ssh-gbIxw49FfNlF/agent.2065
17:00:00 SSH_AGENT_PID=2067
17:00:00 [ssh-agent] Started.
17:00:00 Running ssh-add (command line suppressed)
17:00:01 Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_3968627453665467779.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_3968627453665467779.key)
17:00:01 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
17:00:01 The recommended git tool is: NONE
17:00:02 using credential onap-jenkins-ssh
17:00:02 Wiping out workspace first.
17:00:02 Cloning the remote Git repository
17:00:02 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
17:00:02 > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
17:00:02 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
17:00:02 > git --version # timeout=10
17:00:02 > git --version # 'git version 2.17.1'
17:00:02 using GIT_SSH to set credentials Gerrit user
17:00:02 Verifying host key using manually-configured host key entries
17:00:02 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
17:00:03 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
17:00:03 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
17:00:03 Avoid second fetch
17:00:03 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
17:00:03 Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
17:00:03 > git config core.sparsecheckout # timeout=10
17:00:03 > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
17:00:03 Commit message: "Fix timeout in pap CSIT for auditing undeploys"
17:00:03 > git rev-list --no-walk a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
17:00:07 provisioning config files...
17:00:07 copy managed file [npmrc] to file:/home/jenkins/.npmrc
17:00:07 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
17:00:07 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins2617794102939173544.sh
17:00:07 ---> python-tools-install.sh
17:00:07 Setup pyenv:
17:00:07 * system (set by /opt/pyenv/version)
17:00:07 * 3.8.13 (set by /opt/pyenv/version)
17:00:07 * 3.9.13 (set by /opt/pyenv/version)
17:00:07 * 3.10.6 (set by /opt/pyenv/version)
17:00:11 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-pcnE
17:00:11 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
17:00:15 lf-activate-venv(): INFO: Installing: lftools
17:00:48 lf-activate-venv(): INFO: Adding /tmp/venv-pcnE/bin to PATH
17:00:48 Generating Requirements File
17:01:10 Python 3.10.6
17:01:11 pip 24.2 from /tmp/venv-pcnE/lib/python3.10/site-packages/pip (python 3.10)
17:01:11 appdirs==1.4.4
17:01:11 argcomplete==3.5.0
17:01:11 aspy.yaml==1.3.0
17:01:11 attrs==24.2.0
17:01:11 autopage==0.5.2
17:01:11 beautifulsoup4==4.12.3
17:01:11 boto3==1.35.29
17:01:11 botocore==1.35.29
17:01:11 bs4==0.0.2
17:01:11 cachetools==5.5.0
17:01:11 certifi==2024.8.30
17:01:11 cffi==1.17.1
17:01:11 cfgv==3.4.0
17:01:11 chardet==5.2.0
17:01:11 charset-normalizer==3.3.2
17:01:11 click==8.1.7
17:01:11 cliff==4.7.0
17:01:11 cmd2==2.4.3
17:01:11 cryptography==3.3.2
17:01:11 debtcollector==3.0.0
17:01:11 decorator==5.1.1
17:01:11 defusedxml==0.7.1
17:01:11 Deprecated==1.2.14
17:01:11 distlib==0.3.8
17:01:11 dnspython==2.6.1
17:01:11 docker==4.2.2
17:01:11 dogpile.cache==1.3.3
17:01:11 durationpy==0.7
17:01:11 email_validator==2.2.0
17:01:11 filelock==3.16.1
17:01:11 future==1.0.0
17:01:11 gitdb==4.0.11
17:01:11 GitPython==3.1.43
17:01:11 google-auth==2.35.0
17:01:11 httplib2==0.22.0
17:01:11 identify==2.6.1
17:01:11 idna==3.10
17:01:11 importlib-resources==1.5.0
17:01:11 iso8601==2.1.0
17:01:11 Jinja2==3.1.4
17:01:11 jmespath==1.0.1
17:01:11 jsonpatch==1.33
17:01:11 jsonpointer==3.0.0
17:01:11 jsonschema==4.23.0
17:01:11 jsonschema-specifications==2023.12.1
17:01:11 keystoneauth1==5.8.0
17:01:11 kubernetes==31.0.0
17:01:11 lftools==0.37.10
17:01:11 lxml==5.3.0
17:01:11 MarkupSafe==2.1.5
17:01:11 msgpack==1.1.0
17:01:11 multi_key_dict==2.0.3
17:01:11 munch==4.0.0
17:01:11 netaddr==1.3.0
17:01:11 netifaces==0.11.0
17:01:11 niet==1.4.2
17:01:11 nodeenv==1.9.1
17:01:11 oauth2client==4.1.3
17:01:11 oauthlib==3.2.2
17:01:11 openstacksdk==4.0.0
17:01:11 os-client-config==2.1.0
17:01:11 os-service-types==1.7.0
17:01:11 osc-lib==3.1.0
17:01:11 oslo.config==9.6.0
17:01:11 oslo.context==5.6.0
17:01:11 oslo.i18n==6.4.0
17:01:11 oslo.log==6.1.2
17:01:11 oslo.serialization==5.5.0
17:01:11 oslo.utils==7.3.0
17:01:11 packaging==24.1
17:01:11 pbr==6.1.0
17:01:11 platformdirs==4.3.6
17:01:11 prettytable==3.11.0
17:01:11 pyasn1==0.6.1
17:01:11 pyasn1_modules==0.4.1
17:01:11 pycparser==2.22
17:01:11 pygerrit2==2.0.15
17:01:11 PyGithub==2.4.0
17:01:11 PyJWT==2.9.0
17:01:11 PyNaCl==1.5.0
17:01:11 pyparsing==2.4.7
17:01:11 pyperclip==1.9.0
17:01:11 pyrsistent==0.20.0
17:01:11 python-cinderclient==9.6.0
17:01:11 python-dateutil==2.9.0.post0
17:01:11 python-heatclient==4.0.0
17:01:11 python-jenkins==1.8.2
17:01:11 python-keystoneclient==5.5.0
17:01:11 python-magnumclient==4.7.0
17:01:11 python-openstackclient==7.1.2
17:01:11 python-swiftclient==4.6.0
17:01:11 PyYAML==6.0.2
17:01:11 referencing==0.35.1
17:01:11 requests==2.32.3
17:01:11 requests-oauthlib==2.0.0
17:01:11 requestsexceptions==1.4.0
17:01:11 rfc3986==2.0.0
17:01:11 rpds-py==0.20.0
17:01:11 rsa==4.9
17:01:11 ruamel.yaml==0.18.6
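The `lf-activate-venv()` lines and the frozen package list above come from a throwaway Python virtualenv built by `python-tools-install.sh`. A minimal sketch of that bootstrap, under the assumption that it is just "create venv, install tooling, freeze pins" (this is not the actual LF script; the scratch path is a placeholder):

```shell
# Create a scratch venv and emit a pip-freeze listing, as the log's
# "Generating Requirements File" step does. Path is a placeholder.
VENV="$(mktemp -d)/venv"
python3 -m venv "$VENV"
# The real job also installs lftools here; skipped to stay offline.
"$VENV/bin/pip" freeze --all > requirements.txt   # one name==version pin per line
head -n 3 requirements.txt
```

`--all` makes `pip freeze` include pip itself, which is why tooling pins like `pip 24.2` show up alongside the installed packages.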
17:01:11 ruamel.yaml.clib==0.2.8
17:01:11 s3transfer==0.10.2
17:01:11 simplejson==3.19.3
17:01:11 six==1.16.0
17:01:11 smmap==5.0.1
17:01:11 soupsieve==2.6
17:01:11 stevedore==5.3.0
17:01:11 tabulate==0.9.0
17:01:11 toml==0.10.2
17:01:11 tomlkit==0.13.2
17:01:11 tqdm==4.66.5
17:01:11 typing_extensions==4.12.2
17:01:11 tzdata==2024.2
17:01:11 urllib3==1.26.20
17:01:11 virtualenv==20.26.6
17:01:11 wcwidth==0.2.13
17:01:11 websocket-client==1.8.0
17:01:11 wrapt==1.16.0
17:01:11 xdg==6.0.0
17:01:11 xmltodict==0.13.0
17:01:11 yq==3.4.3
17:01:11 [EnvInject] - Injecting environment variables from a build step.
17:01:11 [EnvInject] - Injecting as environment variables the properties content
17:01:11 SET_JDK_VERSION=openjdk17
17:01:11 GIT_URL="git://cloud.onap.org/mirror"
17:01:11
17:01:11 [EnvInject] - Variables injected successfully.
17:01:11 [policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins10413193996400815104.sh
17:01:11 ---> update-java-alternatives.sh
17:01:11 ---> Updating Java version
17:01:11 ---> Ubuntu/Debian system detected
17:01:11 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
17:01:11 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
17:01:11 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
17:01:12 openjdk version "17.0.4" 2022-07-19
17:01:12 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
17:01:12 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
17:01:12 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
17:01:12 [EnvInject] - Injecting environment variables from a build step.
17:01:12 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
17:01:12 [EnvInject] - Variables injected successfully.
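The `update-alternatives: ... in manual mode` output above is standard Debian/Ubuntu JDK switching. A rough sketch of the commands that produce it (the alternative names and `JAVA_HOME` path match the log, but the exact invocation inside `update-java-alternatives.sh` is an assumption):

```shell
# Point the java/javac alternatives at the OpenJDK 17 install and export JAVA_HOME.
# Requires root; paths assume the Ubuntu openjdk-17-jdk package layout.
sudo update-alternatives --set java  /usr/lib/jvm/java-17-openjdk-amd64/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
java -version   # should now report an openjdk 17.x build
```

`--set` is what puts the links into "manual mode", exactly as the log reports.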
17:01:12 [policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins11651902676722207688.sh
17:01:12 + /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
17:01:12 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
17:01:12 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
17:01:12 Configure a credential helper to remove this warning. See
17:01:12 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
17:01:12
17:01:12 Login Succeeded
17:01:12 docker: 'compose' is not a docker command.
17:01:12 See 'docker --help'
17:01:12 Docker Compose Plugin not installed. Installing now...
17:01:12   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
17:01:12                                  Dload  Upload   Total   Spent    Left  Speed
17:01:13 100 60.0M  100 60.0M    0     0  98.0M      0 --:--:-- --:--:-- --:--:--  171M
17:01:13 Setting project configuration for: pap
17:01:13 Configuring docker compose...
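Two fixable items show up above: the insecure `--password` login that Docker warns about, and the missing Compose plugin, which the script then downloads (~60 MB via curl). A hedged sketch of both remedies; the registry host, plugin version, and download URL are assumptions, not taken from the log:

```shell
# Safer login: feed the password on stdin instead of argv, so it never
# appears in `ps` output or shell history. Variables are placeholders.
echo "$DOCKER_PASSWORD" | docker login "$REGISTRY" --username "$DOCKER_USER" --password-stdin

# Manual install of the Docker Compose v2 CLI plugin when `docker compose`
# is not a recognized subcommand. Version pinned here is an assumption.
DOCKER_CONFIG="${DOCKER_CONFIG:-$HOME/.docker}"
mkdir -p "$DOCKER_CONFIG/cli-plugins"
curl -fsSL "https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-x86_64" \
  -o "$DOCKER_CONFIG/cli-plugins/docker-compose"
chmod +x "$DOCKER_CONFIG/cli-plugins/docker-compose"
docker compose version   # verifies the plugin is now discoverable
```

The unencrypted-password warning itself is only silenced by configuring a credential helper, per the docs URL the log prints.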
17:01:15 Starting apex-pdp application with Grafana
17:01:15 grafana Pulling
17:01:15 policy-db-migrator Pulling
17:01:15 api Pulling
17:01:15 prometheus Pulling
17:01:15 mariadb Pulling
17:01:15 simulator Pulling
17:01:15 zookeeper Pulling
17:01:15 apex-pdp Pulling
17:01:15 pap Pulling
17:01:15 kafka Pulling
17:01:15 [per-layer pull progress, 17:01:15–17:01:18: repeated "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Download complete" / "Extracting" / "Pull complete" updates for the shared image layers, including 31e352740f53 (3.398MB, Pull complete), ecc4de98d537 (73.93MB, Pull complete), ad1782e4d1ef (180.4MB), 353af139d39e (246.5MB), f270a5fd7930 (159.1MB), 6cf350721225 (98.32MB), 1fe734c5fee3 (32.94MB), and others]
bda0b253c68f Extracting [==================================================>] 292B/292B 17:01:18 bda0b253c68f Extracting [==================================================>] 292B/292B 17:01:18 665dfb3388a1 Extracting [==================================================>] 303B/303B 17:01:18 145e9fcd3938 Extracting [==================================================>] 294B/294B 17:01:18 66aec874ce0c Downloading [> ] 48.06kB/4.333MB 17:01:18 4abcf2066143 Extracting [=====> ] 393.2kB/3.409MB 17:01:18 592f1e71407c Verifying Checksum 17:01:18 592f1e71407c Download complete 17:01:18 bde37282dfba Downloading [==> ] 3.01kB/51.13kB 17:01:18 bde37282dfba Downloading [==================================================>] 51.13kB/51.13kB 17:01:18 bde37282dfba Verifying Checksum 17:01:18 bde37282dfba Download complete 17:01:18 a40760cd2625 Downloading [==========> ] 17.84MB/84.46MB 17:01:18 66aec874ce0c Verifying Checksum 17:01:18 66aec874ce0c Download complete 17:01:18 b6982d0733af Downloading [=====> ] 3.01kB/25.99kB 17:01:18 b6982d0733af Downloading [==================================================>] 25.99kB/25.99kB 17:01:18 b6982d0733af Verifying Checksum 17:01:18 b6982d0733af Download complete 17:01:18 1fe734c5fee3 Extracting [> ] 360.4kB/32.94MB 17:01:18 665dfb3388a1 Pull complete 17:01:18 ad1782e4d1ef Extracting [===========================> ] 98.6MB/180.4MB 17:01:18 ab3c28da242b Downloading [> ] 539.6kB/65.84MB 17:01:18 4abcf2066143 Extracting [==================================================>] 3.409MB/3.409MB 17:01:18 e4892977d944 Downloading [> ] 523.2kB/51.58MB 17:01:18 bda0b253c68f Pull complete 17:01:18 b9357b55a7a5 Extracting [============> ] 32.77kB/127kB 17:01:18 b9357b55a7a5 Extracting [==================================================>] 127kB/127kB 17:01:18 a40760cd2625 Downloading [==================> ] 30.82MB/84.46MB 17:01:18 4abcf2066143 Pull complete 17:01:18 39aee5fd3406 Extracting [==================================================>] 142B/142B 
17:01:18 39aee5fd3406 Extracting [==================================================>] 142B/142B 17:01:18 145e9fcd3938 Pull complete 17:01:18 4be774fd73e2 Extracting [============> ] 32.77kB/127.4kB 17:01:18 4be774fd73e2 Extracting [==================================================>] 127.4kB/127.4kB 17:01:18 4be774fd73e2 Extracting [==================================================>] 127.4kB/127.4kB 17:01:18 ab3c28da242b Downloading [=======> ] 10.27MB/65.84MB 17:01:18 ad1782e4d1ef Extracting [============================> ] 101.4MB/180.4MB 17:01:18 f270a5fd7930 Extracting [> ] 557.1kB/159.1MB 17:01:18 e4892977d944 Downloading [========> ] 8.388MB/51.58MB 17:01:18 1fe734c5fee3 Extracting [====> ] 2.884MB/32.94MB 17:01:18 a40760cd2625 Downloading [===========================> ] 47.04MB/84.46MB 17:01:18 ab3c28da242b Downloading [================> ] 22.17MB/65.84MB 17:01:18 ad1782e4d1ef Extracting [=============================> ] 104.7MB/180.4MB 17:01:18 f270a5fd7930 Extracting [===> ] 10.03MB/159.1MB 17:01:18 e4892977d944 Downloading [=================> ] 18.35MB/51.58MB 17:01:18 39aee5fd3406 Pull complete 17:01:18 592f1e71407c Extracting [> ] 32.77kB/3.184MB 17:01:18 1fe734c5fee3 Extracting [========> ] 5.767MB/32.94MB 17:01:18 b9357b55a7a5 Pull complete 17:01:18 4c3047628e17 Extracting [==================================================>] 1.324kB/1.324kB 17:01:18 4c3047628e17 Extracting [==================================================>] 1.324kB/1.324kB 17:01:18 4be774fd73e2 Pull complete 17:01:18 71f834c33815 Extracting [==================================================>] 1.147kB/1.147kB 17:01:18 71f834c33815 Extracting [==================================================>] 1.147kB/1.147kB 17:01:18 a40760cd2625 Downloading [==================================> ] 58.93MB/84.46MB 17:01:18 ab3c28da242b Downloading [==========================> ] 34.6MB/65.84MB 17:01:18 f270a5fd7930 Extracting [======> ] 20.05MB/159.1MB 17:01:18 e4892977d944 Downloading 
[===========================> ] 28.31MB/51.58MB 17:01:18 ad1782e4d1ef Extracting [=============================> ] 107.5MB/180.4MB 17:01:18 1fe734c5fee3 Extracting [==========> ] 7.209MB/32.94MB 17:01:19 a40760cd2625 Downloading [=========================================> ] 70.83MB/84.46MB 17:01:19 592f1e71407c Extracting [=====> ] 327.7kB/3.184MB 17:01:19 4c3047628e17 Pull complete 17:01:19 ab3c28da242b Downloading [====================================> ] 47.58MB/65.84MB 17:01:19 e4892977d944 Downloading [=====================================> ] 38.8MB/51.58MB 17:01:19 f270a5fd7930 Extracting [========> ] 26.74MB/159.1MB 17:01:19 ad1782e4d1ef Extracting [==============================> ] 110.3MB/180.4MB 17:01:19 1fe734c5fee3 Extracting [===============> ] 10.09MB/32.94MB 17:01:19 a40760cd2625 Verifying Checksum 17:01:19 a40760cd2625 Download complete 17:01:19 71f834c33815 Pull complete 17:01:19 ef2b3f3f597e Downloading [============> ] 3.01kB/11.92kB 17:01:19 ef2b3f3f597e Downloading [==================================================>] 11.92kB/11.92kB 17:01:19 ef2b3f3f597e Verifying Checksum 17:01:19 ef2b3f3f597e Download complete 17:01:19 592f1e71407c Extracting [========================================> ] 2.589MB/3.184MB 17:01:19 27a3c8ebdfbf Downloading [==================================================>] 1.227kB/1.227kB 17:01:19 27a3c8ebdfbf Verifying Checksum 17:01:19 27a3c8ebdfbf Download complete 17:01:19 ab3c28da242b Downloading [===========================================> ] 56.77MB/65.84MB 17:01:19 e4892977d944 Downloading [=================================================> ] 50.85MB/51.58MB 17:01:19 9fa9226be034 Downloading [> ] 15.3kB/783kB 17:01:19 e4892977d944 Verifying Checksum 17:01:19 e4892977d944 Download complete 17:01:19 f270a5fd7930 Extracting [==========> ] 32.87MB/159.1MB 17:01:19 9fa9226be034 Downloading [==================================================>] 783kB/783kB 17:01:19 9fa9226be034 Download complete 17:01:19 9fa9226be034 
Extracting [==> ] 32.77kB/783kB 17:01:19 6cf350721225 Extracting [> ] 557.1kB/98.32MB 17:01:19 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 17:01:19 ad1782e4d1ef Extracting [===============================> ] 113.1MB/180.4MB 17:01:19 1fe734c5fee3 Extracting [====================> ] 13.34MB/32.94MB 17:01:19 1617e25568b2 Downloading [==================================================>] 480.9kB/480.9kB 17:01:19 1617e25568b2 Verifying Checksum 17:01:19 1617e25568b2 Download complete 17:01:19 02203e3d6934 Downloading [> ] 539.6kB/56.02MB 17:01:19 592f1e71407c Extracting [=================================================> ] 3.178MB/3.184MB 17:01:19 a40760cd2625 Extracting [> ] 557.1kB/84.46MB 17:01:19 8be4b7271108 Downloading [> ] 523.2kB/50.82MB 17:01:19 ab3c28da242b Downloading [=================================================> ] 64.88MB/65.84MB 17:01:19 ab3c28da242b Verifying Checksum 17:01:19 ab3c28da242b Download complete 17:01:19 592f1e71407c Extracting [==================================================>] 3.184MB/3.184MB 17:01:19 f270a5fd7930 Extracting [============> ] 38.99MB/159.1MB 17:01:19 6cf350721225 Extracting [==> ] 5.571MB/98.32MB 17:01:19 ad1782e4d1ef Extracting [===============================> ] 115.3MB/180.4MB 17:01:19 1fe734c5fee3 Extracting [========================> ] 15.86MB/32.94MB 17:01:19 a40760cd2625 Extracting [====> ] 7.799MB/84.46MB 17:01:19 9fa9226be034 Extracting [=======================> ] 360.4kB/783kB 17:01:19 9fa9226be034 Extracting [==================================================>] 783kB/783kB 17:01:19 592f1e71407c Pull complete 17:01:19 66aec874ce0c Extracting [> ] 65.54kB/4.333MB 17:01:19 f270a5fd7930 Extracting [==============> ] 46.79MB/159.1MB 17:01:19 6cf350721225 Extracting [=====> ] 11.14MB/98.32MB 17:01:19 ad1782e4d1ef Extracting [================================> ] 118.1MB/180.4MB 17:01:19 1fe734c5fee3 Extracting [============================> ] 19.1MB/32.94MB 17:01:19 a40760cd2625 Extracting [=========> ] 
15.6MB/84.46MB 17:01:19 9fa9226be034 Pull complete 17:01:19 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 17:01:19 f270a5fd7930 Extracting [================> ] 53.48MB/159.1MB 17:01:19 6cf350721225 Extracting [========> ] 16.71MB/98.32MB 17:01:19 66aec874ce0c Extracting [===> ] 262.1kB/4.333MB 17:01:19 ad1782e4d1ef Extracting [=================================> ] 121.4MB/180.4MB 17:01:19 a40760cd2625 Extracting [===========> ] 20.05MB/84.46MB 17:01:19 1fe734c5fee3 Extracting [===============================> ] 20.91MB/32.94MB 17:01:19 f270a5fd7930 Extracting [==================> ] 58.49MB/159.1MB 17:01:19 6cf350721225 Extracting [============> ] 25.07MB/98.32MB 17:01:19 66aec874ce0c Extracting [================================> ] 2.818MB/4.333MB 17:01:19 ad1782e4d1ef Extracting [==================================> ] 123.1MB/180.4MB 17:01:19 a40760cd2625 Extracting [===============> ] 26.74MB/84.46MB 17:01:19 1617e25568b2 Extracting [==================================> ] 327.7kB/480.9kB 17:01:19 66aec874ce0c Extracting [==================================================>] 4.333MB/4.333MB 17:01:19 1fe734c5fee3 Extracting [=================================> ] 22.35MB/32.94MB 17:01:19 f270a5fd7930 Extracting [====================> ] 64.06MB/159.1MB 17:01:19 6cf350721225 Extracting [================> ] 32.31MB/98.32MB 17:01:19 ad1782e4d1ef Extracting [==================================> ] 125.9MB/180.4MB 17:01:19 66aec874ce0c Pull complete 17:01:19 a40760cd2625 Extracting [==================> ] 31.2MB/84.46MB 17:01:19 bde37282dfba Extracting [================================> ] 32.77kB/51.13kB 17:01:19 bde37282dfba Extracting [==================================================>] 51.13kB/51.13kB 17:01:19 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 17:01:19 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 17:01:19 f270a5fd7930 Extracting [=====================> ] 
68.52MB/159.1MB 17:01:19 6cf350721225 Extracting [====================> ] 39.55MB/98.32MB 17:01:19 a40760cd2625 Extracting [======================> ] 37.32MB/84.46MB 17:01:19 ad1782e4d1ef Extracting [===================================> ] 128.1MB/180.4MB 17:01:19 1fe734c5fee3 Extracting [====================================> ] 23.79MB/32.94MB 17:01:19 8be4b7271108 Downloading [==> ] 2.62MB/50.82MB 17:01:19 02203e3d6934 Downloading [=======> ] 8.109MB/56.02MB 17:01:19 8becc689631f Downloading [==================================================>] 604B/604B 17:01:19 8becc689631f Verifying Checksum 17:01:19 8becc689631f Download complete 17:01:19 ceaeea15c1bf Downloading [==================================================>] 2.678kB/2.678kB 17:01:19 ceaeea15c1bf Verifying Checksum 17:01:19 ceaeea15c1bf Download complete 17:01:19 f270a5fd7930 Extracting [=======================> ] 74.09MB/159.1MB 17:01:19 1617e25568b2 Pull complete 17:01:19 6cf350721225 Extracting [======================> ] 44.56MB/98.32MB 17:01:19 564720d6ed13 Downloading [================================================> ] 3.011kB/3.089kB 17:01:19 564720d6ed13 Downloading [==================================================>] 3.089kB/3.089kB 17:01:19 564720d6ed13 Verifying Checksum 17:01:19 564720d6ed13 Download complete 17:01:19 ad1782e4d1ef Extracting [===================================> ] 129.8MB/180.4MB 17:01:19 a40760cd2625 Extracting [===========================> ] 45.68MB/84.46MB 17:01:19 1fd5d47e09da Downloading [=====================================> ] 3.011kB/4.022kB 17:01:19 1fd5d47e09da Downloading [==================================================>] 4.022kB/4.022kB 17:01:19 1fd5d47e09da Verifying Checksum 17:01:19 1fd5d47e09da Download complete 17:01:19 1afe4a0d7329 Downloading [==================================================>] 1.438kB/1.438kB 17:01:19 1afe4a0d7329 Verifying Checksum 17:01:19 1afe4a0d7329 Download complete 17:01:20 bde37282dfba Pull complete 17:01:20 1fe734c5fee3 
Extracting [======================================> ] 25.23MB/32.94MB 17:01:20 bd55ccfa5aad Downloading [=> ] 3.009kB/138.1kB 17:01:20 8be4b7271108 Downloading [========> ] 8.388MB/50.82MB 17:01:20 b6982d0733af Extracting [==================================================>] 25.99kB/25.99kB 17:01:20 b6982d0733af Extracting [==================================================>] 25.99kB/25.99kB 17:01:20 bd55ccfa5aad Downloading [==================================================>] 138.1kB/138.1kB 17:01:20 bd55ccfa5aad Verifying Checksum 17:01:20 02203e3d6934 Downloading [===========> ] 12.43MB/56.02MB 17:01:20 bd55ccfa5aad Download complete 17:01:20 54f884861fc1 Downloading [==================================================>] 100B/100B 17:01:20 54f884861fc1 Verifying Checksum 17:01:20 54f884861fc1 Download complete 17:01:20 f270a5fd7930 Extracting [=========================> ] 79.66MB/159.1MB 17:01:20 6cf350721225 Extracting [=========================> ] 50.69MB/98.32MB 17:01:20 b09316e948c6 Downloading [==================================================>] 719B/719B 17:01:20 b09316e948c6 Verifying Checksum 17:01:20 b09316e948c6 Download complete 17:01:20 ad1782e4d1ef Extracting [====================================> ] 132.6MB/180.4MB 17:01:20 a40760cd2625 Extracting [===============================> ] 52.36MB/84.46MB 17:01:20 10ac4908093d Downloading [> ] 310.2kB/30.43MB 17:01:20 1fe734c5fee3 Extracting [=========================================> ] 27.03MB/32.94MB 17:01:20 8be4b7271108 Downloading [==================> ] 18.87MB/50.82MB 17:01:20 02203e3d6934 Downloading [=====================> ] 23.79MB/56.02MB 17:01:20 f270a5fd7930 Extracting [==========================> ] 85.79MB/159.1MB 17:01:20 6cf350721225 Extracting [============================> ] 55.71MB/98.32MB 17:01:20 a40760cd2625 Extracting [==================================> ] 58.49MB/84.46MB 17:01:20 ad1782e4d1ef Extracting [=====================================> ] 135.9MB/180.4MB 17:01:20 02203e3d6934 
Downloading [==============================> ] 34.6MB/56.02MB 17:01:20 8be4b7271108 Downloading [===========================> ] 28.31MB/50.82MB 17:01:20 b6982d0733af Pull complete 17:01:20 1fe734c5fee3 Extracting [=============================================> ] 29.92MB/32.94MB 17:01:20 f270a5fd7930 Extracting [============================> ] 91.91MB/159.1MB 17:01:20 6cf350721225 Extracting [===============================> ] 62.39MB/98.32MB 17:01:20 a40760cd2625 Extracting [=======================================> ] 66.29MB/84.46MB 17:01:20 ad1782e4d1ef Extracting [======================================> ] 138.1MB/180.4MB 17:01:20 ab3c28da242b Extracting [> ] 557.1kB/65.84MB 17:01:20 1fe734c5fee3 Extracting [===============================================> ] 31MB/32.94MB 17:01:20 6cf350721225 Extracting [=================================> ] 66.85MB/98.32MB 17:01:20 f270a5fd7930 Extracting [==============================> ] 98.6MB/159.1MB 17:01:20 a40760cd2625 Extracting [===========================================> ] 74.09MB/84.46MB 17:01:20 1fe734c5fee3 Extracting [==================================================>] 32.94MB/32.94MB 17:01:20 ad1782e4d1ef Extracting [=======================================> ] 141.5MB/180.4MB 17:01:20 1fe734c5fee3 Pull complete 17:01:20 ab3c28da242b Extracting [==> ] 2.785MB/65.84MB 17:01:20 c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB 17:01:20 c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB 17:01:20 a40760cd2625 Extracting [=================================================> ] 84.12MB/84.46MB 17:01:20 6cf350721225 Extracting [======================================> ] 75.76MB/98.32MB 17:01:20 a40760cd2625 Extracting [==================================================>] 84.46MB/84.46MB 17:01:20 f270a5fd7930 Extracting [================================> ] 104.7MB/159.1MB 17:01:20 a40760cd2625 Pull complete 17:01:20 114f99593bd8 Extracting 
[==================================================>] 1.119kB/1.119kB 17:01:20 114f99593bd8 Extracting [==================================================>] 1.119kB/1.119kB 17:01:20 ad1782e4d1ef Extracting [=======================================> ] 143.2MB/180.4MB 17:01:20 ab3c28da242b Extracting [=====> ] 6.685MB/65.84MB 17:01:20 6cf350721225 Extracting [===========================================> ] 85.23MB/98.32MB 17:01:20 f270a5fd7930 Extracting [===================================> ] 112MB/159.1MB 17:01:20 c8e6f0452a8e Pull complete 17:01:20 0143f8517101 Extracting [==================================================>] 5.324kB/5.324kB 17:01:20 0143f8517101 Extracting [==================================================>] 5.324kB/5.324kB 17:01:20 ad1782e4d1ef Extracting [========================================> ] 147.1MB/180.4MB 17:01:20 114f99593bd8 Pull complete 17:01:20 6cf350721225 Extracting [=================================================> ] 96.93MB/98.32MB 17:01:20 api Pulled 17:01:20 6cf350721225 Extracting [==================================================>] 98.32MB/98.32MB 17:01:20 f270a5fd7930 Extracting [=====================================> ] 120.9MB/159.1MB 17:01:20 ab3c28da242b Extracting [========> ] 10.58MB/65.84MB 17:01:20 6cf350721225 Pull complete 17:01:20 de723b4c7ed9 Extracting [==================================================>] 1.297kB/1.297kB 17:01:20 de723b4c7ed9 Extracting [==================================================>] 1.297kB/1.297kB 17:01:20 0143f8517101 Pull complete 17:01:20 ee69cc1a77e2 Extracting [==================================================>] 5.312kB/5.312kB 17:01:20 ee69cc1a77e2 Extracting [==================================================>] 5.312kB/5.312kB 17:01:20 ad1782e4d1ef Extracting [=========================================> ] 150.4MB/180.4MB 17:01:20 f270a5fd7930 Extracting [========================================> ] 130.4MB/159.1MB 17:01:20 ab3c28da242b Extracting [=========> ] 12.26MB/65.84MB 
17:01:20 de723b4c7ed9 Pull complete 17:01:20 ad1782e4d1ef Extracting [==========================================> ] 154.3MB/180.4MB 17:01:20 pap Pulled 17:01:20 f270a5fd7930 Extracting [===========================================> ] 137MB/159.1MB 17:01:20 ee69cc1a77e2 Pull complete 17:01:20 81667b400b57 Extracting [==================================================>] 1.034kB/1.034kB 17:01:20 81667b400b57 Extracting [==================================================>] 1.034kB/1.034kB 17:01:20 ab3c28da242b Extracting [===========> ] 15.6MB/65.84MB 17:01:21 ad1782e4d1ef Extracting [===========================================> ] 158.2MB/180.4MB 17:01:21 f270a5fd7930 Extracting [=============================================> ] 145.4MB/159.1MB 17:01:21 ab3c28da242b Extracting [===============> ] 20.05MB/65.84MB 17:01:21 81667b400b57 Pull complete 17:01:21 ec3b6d0cc414 Extracting [==================================================>] 1.036kB/1.036kB 17:01:21 ec3b6d0cc414 Extracting [==================================================>] 1.036kB/1.036kB 17:01:21 ad1782e4d1ef Extracting [=============================================> ] 162.7MB/180.4MB 17:01:21 f270a5fd7930 Extracting [================================================> ] 154.9MB/159.1MB 17:01:21 ab3c28da242b Extracting [===================> ] 25.07MB/65.84MB 17:01:21 f270a5fd7930 Extracting [==================================================>] 159.1MB/159.1MB 17:01:21 ec3b6d0cc414 Pull complete 17:01:21 a8d3998ab21c Extracting [==================================================>] 13.9kB/13.9kB 17:01:21 a8d3998ab21c Extracting [==================================================>] 13.9kB/13.9kB 17:01:21 f270a5fd7930 Pull complete 17:01:21 9038eaba24f8 Extracting [==================================================>] 1.153kB/1.153kB 17:01:21 9038eaba24f8 Extracting [==================================================>] 1.153kB/1.153kB 17:01:21 ad1782e4d1ef Extracting [=============================================> ] 
164.9MB/180.4MB 17:01:21 ab3c28da242b Extracting [=====================> ] 28.41MB/65.84MB 17:01:21 a8d3998ab21c Pull complete 17:01:21 89d6e2ec6372 Extracting [==================================================>] 13.79kB/13.79kB 17:01:21 89d6e2ec6372 Extracting [==================================================>] 13.79kB/13.79kB 17:01:21 ad1782e4d1ef Extracting [===============================================> ] 169.9MB/180.4MB 17:01:21 ab3c28da242b Extracting [========================> ] 32.31MB/65.84MB 17:01:21 9038eaba24f8 Pull complete 17:01:21 04a7796b82ca Extracting [==================================================>] 1.127kB/1.127kB 17:01:21 04a7796b82ca Extracting [==================================================>] 1.127kB/1.127kB 17:01:21 ad1782e4d1ef Extracting [===============================================> ] 172.1MB/180.4MB 17:01:21 ab3c28da242b Extracting [============================> ] 37.32MB/65.84MB 17:01:21 89d6e2ec6372 Pull complete 17:01:21 04a7796b82ca Pull complete 17:01:21 80096f8bb25e Extracting [==================================================>] 2.238kB/2.238kB 17:01:21 80096f8bb25e Extracting [==================================================>] 2.238kB/2.238kB 17:01:21 simulator Pulled 17:01:21 ad1782e4d1ef Extracting [================================================> ] 173.8MB/180.4MB 17:01:21 ab3c28da242b Extracting [==============================> ] 40.67MB/65.84MB 17:01:21 ab3c28da242b Extracting [==================================> ] 45.68MB/65.84MB 17:01:21 ad1782e4d1ef Extracting [================================================> ] 176.6MB/180.4MB 17:01:21 80096f8bb25e Pull complete 17:01:21 cbd359ebc87d Extracting [==================================================>] 2.23kB/2.23kB 17:01:21 cbd359ebc87d Extracting [==================================================>] 2.23kB/2.23kB 17:01:21 ab3c28da242b Extracting [=====================================> ] 49.58MB/65.84MB 17:01:21 ad1782e4d1ef Extracting 
[=================================================> ] 178.3MB/180.4MB 17:01:21 cbd359ebc87d Pull complete 17:01:21 policy-db-migrator Pulled 17:01:21 ab3c28da242b Extracting [=======================================> ] 52.36MB/65.84MB 17:01:21 ad1782e4d1ef Extracting [=================================================> ] 179.9MB/180.4MB 17:01:21 ad1782e4d1ef Extracting [==================================================>] 180.4MB/180.4MB 17:01:22 ab3c28da242b Extracting [===========================================> ] 56.82MB/65.84MB 17:01:22 ad1782e4d1ef Pull complete 17:01:22 bc8105c6553b Extracting [===================> ] 32.77kB/84.13kB 17:01:22 bc8105c6553b Extracting [==================================================>] 84.13kB/84.13kB 17:01:22 bc8105c6553b Extracting [==================================================>] 84.13kB/84.13kB 17:01:22 ab3c28da242b Extracting [=============================================> ] 60.16MB/65.84MB 17:01:22 bc8105c6553b Pull complete 17:01:22 929241f867bb Extracting [==================================================>] 92B/92B 17:01:22 929241f867bb Extracting [==================================================>] 92B/92B 17:01:22 8be4b7271108 Downloading [==================================> ] 34.6MB/50.82MB 17:01:22 02203e3d6934 Downloading [=====================================> ] 41.63MB/56.02MB 17:01:22 ab3c28da242b Extracting [================================================> ] 63.5MB/65.84MB 17:01:22 10ac4908093d Downloading [===> ] 2.178MB/30.43MB 17:01:22 929241f867bb Pull complete 17:01:22 37728a7352e6 Extracting [==================================================>] 92B/92B 17:01:22 37728a7352e6 Extracting [==================================================>] 92B/92B 17:01:22 8be4b7271108 Downloading [=============================================> ] 46.66MB/50.82MB 17:01:22 02203e3d6934 Downloading [================================================> ] 54.07MB/56.02MB 17:01:22 ab3c28da242b Extracting 
[=================================================> ] 65.73MB/65.84MB 17:01:22 10ac4908093d Downloading [===============> ] 9.649MB/30.43MB 17:01:22 ab3c28da242b Extracting [==================================================>] 65.84MB/65.84MB 17:01:22 02203e3d6934 Verifying Checksum 17:01:22 02203e3d6934 Download complete 17:01:22 8be4b7271108 Verifying Checksum 17:01:22 8be4b7271108 Download complete 17:01:22 44779101e748 Downloading [==================================================>] 1.744kB/1.744kB 17:01:22 44779101e748 Verifying Checksum 17:01:22 44779101e748 Download complete 17:01:22 a721db3e3f3d Downloading [> ] 64.45kB/5.526MB 17:01:22 1850a929b84a Downloading [==================================================>] 149B/149B 17:01:22 1850a929b84a Verifying Checksum 17:01:22 1850a929b84a Download complete 17:01:22 ab3c28da242b Pull complete 17:01:22 397a918c7da3 Downloading [==================================================>] 327B/327B 17:01:22 397a918c7da3 Verifying Checksum 17:01:22 397a918c7da3 Download complete 17:01:22 806be17e856d Downloading [> ] 539.6kB/89.72MB 17:01:22 10ac4908093d Downloading [=====================================> ] 22.72MB/30.43MB 17:01:22 02203e3d6934 Extracting [> ] 557.1kB/56.02MB 17:01:22 37728a7352e6 Pull complete 17:01:22 3f40c7aa46a6 Extracting [==================================================>] 302B/302B 17:01:22 3f40c7aa46a6 Extracting [==================================================>] 302B/302B 17:01:22 a721db3e3f3d Verifying Checksum 17:01:22 a721db3e3f3d Download complete 17:01:22 634de6c90876 Downloading [===========================================> ] 3.011kB/3.49kB 17:01:22 634de6c90876 Downloading [==================================================>] 3.49kB/3.49kB 17:01:22 634de6c90876 Verifying Checksum 17:01:22 634de6c90876 Download complete 17:01:22 10ac4908093d Verifying Checksum 17:01:22 10ac4908093d Download complete 17:01:22 cd00854cfb1a Downloading [=====================> ] 3.011kB/6.971kB 17:01:22 
17:01:22 cd00854cfb1a Download complete
17:01:22 3f40c7aa46a6 Pull complete
17:01:22 56f27190e824 Download complete
17:01:23 732c9ebb730c Download complete
17:01:23 806be17e856d Download complete
17:01:23 ed746366f1b8 Download complete
17:01:23 10894799ccd9 Download complete
17:01:23 e7688095d1e6 Download complete
17:01:23 8eab815b3593 Download complete
17:01:23 00ded6dd259e Download complete
17:01:23 8d377259558c Download complete
17:01:23 296f622c8150 Download complete
17:01:23 4ee3050cff6b Download complete
17:01:23 5df3538dc51e Download complete
17:01:24 02203e3d6934 Pull complete
17:01:24 10ac4908093d Pull complete
17:01:24 44779101e748 Pull complete
17:01:24 56f27190e824 Pull complete
17:01:25 519f42193ec8 Download complete
17:01:25 878348106a95 Download complete
17:01:25 8e70b9b9b078 Download complete
17:01:25 98acab318002 Verifying Checksum
17:01:25 a721db3e3f3d Pull complete
17:01:26 1850a929b84a Pull complete
17:01:26 8be4b7271108 Pull complete
17:01:26 397a918c7da3 Pull complete
17:01:26 e4892977d944 Pull complete
17:01:27 8becc689631f Pull complete
17:01:27 ef2b3f3f597e Pull complete
17:01:27 353af139d39e Pull complete
17:01:27 27a3c8ebdfbf Pull complete
17:01:27 ceaeea15c1bf Pull complete
17:01:27 apex-pdp Pulled
17:01:27 grafana Pulled
17:01:27 564720d6ed13 Pull complete
17:01:27 1fd5d47e09da Pull complete
17:01:27 1afe4a0d7329 Pull complete
17:01:28 bd55ccfa5aad Pull complete
17:01:28 54f884861fc1 Pull complete
17:01:30 b09316e948c6 Pull complete
17:01:31 806be17e856d Pull complete
17:01:31 prometheus Pulled
17:01:31 634de6c90876 Pull complete
17:01:31 cd00854cfb1a Pull complete
17:01:31 mariadb Pulled
17:01:36 8e70b9b9b078 Pull complete
17:01:37 732c9ebb730c Pull complete
17:01:39 ed746366f1b8 Pull complete
17:01:40 10894799ccd9 Extracting [==================================================>] 21.28kB/21.28kB
10894799ccd9 Extracting [==================================================>] 21.28kB/21.28kB 17:01:40 10894799ccd9 Extracting [==================================================>] 21.28kB/21.28kB 17:01:41 10894799ccd9 Pull complete 17:01:41 10894799ccd9 Pull complete 17:01:44 8d377259558c Extracting [> ] 458.8kB/43.24MB 17:01:44 8d377259558c Extracting [> ] 458.8kB/43.24MB 17:01:44 8d377259558c Extracting [==================> ] 15.6MB/43.24MB 17:01:44 8d377259558c Extracting [==================> ] 15.6MB/43.24MB 17:01:44 8d377259558c Extracting [=================================> ] 28.9MB/43.24MB 17:01:44 8d377259558c Extracting [=================================> ] 28.9MB/43.24MB 17:01:44 8d377259558c Extracting [==================================================>] 43.24MB/43.24MB 17:01:44 8d377259558c Extracting [==================================================>] 43.24MB/43.24MB 17:01:46 8d377259558c Pull complete 17:01:46 8d377259558c Pull complete 17:01:46 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:01:46 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:01:46 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:01:46 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:01:46 e7688095d1e6 Pull complete 17:01:46 e7688095d1e6 Pull complete 17:01:46 8eab815b3593 Extracting [==================================================>] 853B/853B 17:01:46 8eab815b3593 Extracting [==================================================>] 853B/853B 17:01:46 8eab815b3593 Extracting [==================================================>] 853B/853B 17:01:46 8eab815b3593 Extracting [==================================================>] 853B/853B 17:01:46 8eab815b3593 Pull complete 17:01:46 8eab815b3593 Pull complete 17:01:46 00ded6dd259e Extracting [==================================================>] 
98B/98B 17:01:46 00ded6dd259e Extracting [==================================================>] 98B/98B 17:01:46 00ded6dd259e Extracting [==================================================>] 98B/98B 17:01:46 00ded6dd259e Extracting [==================================================>] 98B/98B 17:01:46 00ded6dd259e Pull complete 17:01:46 00ded6dd259e Pull complete 17:01:46 296f622c8150 Extracting [==================================================>] 172B/172B 17:01:46 296f622c8150 Extracting [==================================================>] 172B/172B 17:01:46 296f622c8150 Extracting [==================================================>] 172B/172B 17:01:46 296f622c8150 Extracting [==================================================>] 172B/172B 17:01:46 296f622c8150 Pull complete 17:01:46 296f622c8150 Pull complete 17:01:46 4ee3050cff6b Extracting [=======> ] 32.77kB/230.6kB 17:01:46 4ee3050cff6b Extracting [=======> ] 32.77kB/230.6kB 17:01:46 4ee3050cff6b Extracting [==================================================>] 230.6kB/230.6kB 17:01:46 4ee3050cff6b Extracting [==================================================>] 230.6kB/230.6kB 17:01:46 4ee3050cff6b Pull complete 17:01:46 4ee3050cff6b Pull complete 17:01:46 98acab318002 Extracting [> ] 557.1kB/121.9MB 17:01:47 519f42193ec8 Extracting [> ] 557.1kB/121.9MB 17:01:47 98acab318002 Extracting [=====> ] 13.37MB/121.9MB 17:01:47 519f42193ec8 Extracting [===> ] 9.47MB/121.9MB 17:01:47 98acab318002 Extracting [============> ] 30.08MB/121.9MB 17:01:47 519f42193ec8 Extracting [=========> ] 22.28MB/121.9MB 17:01:47 98acab318002 Extracting [===================> ] 47.35MB/121.9MB 17:01:47 519f42193ec8 Extracting [==============> ] 36.21MB/121.9MB 17:01:47 98acab318002 Extracting [==========================> ] 65.18MB/121.9MB 17:01:47 519f42193ec8 Extracting [=====================> ] 53.48MB/121.9MB 17:01:47 98acab318002 Extracting [==================================> ] 83.56MB/121.9MB 17:01:47 519f42193ec8 Extracting 
[=============================> ] 72.42MB/121.9MB 17:01:47 98acab318002 Extracting [========================================> ] 99.71MB/121.9MB 17:01:47 519f42193ec8 Extracting [====================================> ] 89.13MB/121.9MB 17:01:47 98acab318002 Extracting [===============================================> ] 114.8MB/121.9MB 17:01:47 519f42193ec8 Extracting [==========================================> ] 104.2MB/121.9MB 17:01:47 98acab318002 Extracting [=================================================> ] 119.8MB/121.9MB 17:01:47 519f42193ec8 Extracting [===============================================> ] 117MB/121.9MB 17:01:47 98acab318002 Extracting [==================================================>] 121.9MB/121.9MB 17:01:47 98acab318002 Pull complete 17:01:47 878348106a95 Extracting [==================================================>] 3.447kB/3.447kB 17:01:47 878348106a95 Extracting [==================================================>] 3.447kB/3.447kB 17:01:47 519f42193ec8 Extracting [=================================================> ] 120.3MB/121.9MB 17:01:47 519f42193ec8 Extracting [==================================================>] 121.9MB/121.9MB 17:01:48 878348106a95 Pull complete 17:01:48 519f42193ec8 Pull complete 17:01:48 5df3538dc51e Extracting [==================================================>] 3.627kB/3.627kB 17:01:48 5df3538dc51e Extracting [==================================================>] 3.627kB/3.627kB 17:01:48 zookeeper Pulled 17:01:48 5df3538dc51e Pull complete 17:01:48 kafka Pulled 17:01:48 Network compose_default Creating 17:01:48 Network compose_default Created 17:01:48 Container prometheus Creating 17:01:48 Container mariadb Creating 17:01:48 Container simulator Creating 17:01:48 Container zookeeper Creating 17:01:57 Container simulator Created 17:01:57 Container prometheus Created 17:01:57 Container grafana Creating 17:01:57 Container zookeeper Created 17:01:57 Container mariadb Created 17:01:57 Container 
policy-db-migrator Creating 17:01:57 Container kafka Creating 17:01:57 Container grafana Created 17:01:57 Container kafka Created 17:01:57 Container policy-db-migrator Created 17:01:57 Container policy-api Creating 17:01:57 Container policy-api Created 17:01:57 Container policy-pap Creating 17:01:57 Container policy-pap Created 17:01:57 Container policy-apex-pdp Creating 17:01:58 Container policy-apex-pdp Created 17:01:58 Container zookeeper Starting 17:01:58 Container prometheus Starting 17:01:58 Container mariadb Starting 17:01:58 Container simulator Starting 17:01:59 Container prometheus Started 17:01:59 Container grafana Starting 17:01:59 Container grafana Started 17:02:01 Container zookeeper Started 17:02:01 Container kafka Starting 17:02:01 Container kafka Started 17:02:03 Container simulator Started 17:02:04 Container mariadb Started 17:02:04 Container policy-db-migrator Starting 17:02:05 Container policy-db-migrator Started 17:02:05 Container policy-api Starting 17:02:06 Container policy-api Started 17:02:06 Container policy-pap Starting 17:02:06 Container policy-pap Started 17:02:06 Container policy-apex-pdp Starting 17:02:07 Container policy-apex-pdp Started 17:02:07 Prometheus server: http://localhost:30259 17:02:07 Grafana server: http://localhost:30269 17:02:17 Waiting for REST to come up on localhost port 30003... 
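The "Waiting for REST to come up on localhost port 30003..." step above can be sketched as a small shell poll loop. This is an assumed reconstruction for illustration only; the actual wait logic lives in the CSIT scripts of the policy/docker repository, and the function name `wait_for_port` is hypothetical:

```shell
#!/bin/sh
# Hedged sketch of the readiness poll (assumed helper, not the real CSIT script).
# Retries a TCP connect until the port answers or the attempt budget runs out.
wait_for_port() {
  host=$1; port=$2; retries=$3; interval=$4
  n=0
  while [ "$n" -lt "$retries" ]; do
    # nc -z probes the port without sending data
    if nc -z "$host" "$port" 2>/dev/null; then
      echo "Port $port on $host is up after $n retries"
      return 0
    fi
    n=$((n + 1))
    sleep "$interval"
  done
  echo "Port $port on $host never came up" >&2
  return 1
}
```

In this run the equivalent call would be something like `wait_for_port localhost 30003 60 5`, with a `docker ps` NAMES/STATUS snapshot printed between attempts, which is what produces the repeated status tables that follow.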
17:02:17 NAMES              STATUS
17:02:17 policy-apex-pdp    Up 10 seconds
17:02:17 policy-pap         Up 10 seconds
17:02:17 policy-api         Up 11 seconds
17:02:17 kafka              Up 15 seconds
17:02:17 grafana            Up 17 seconds
17:02:17 zookeeper          Up 16 seconds
17:02:17 simulator          Up 14 seconds
17:02:17 mariadb            Up 13 seconds
17:02:17 prometheus         Up 18 seconds
17:02:42 NAMES              STATUS
17:02:42 policy-apex-pdp    Up 35 seconds
17:02:42 policy-pap         Up 36 seconds
17:02:42 policy-api         Up 36 seconds
17:02:42 kafka              Up 40 seconds
17:02:42 grafana            Up 43 seconds
17:02:42 zookeeper          Up 41 seconds
17:02:42 simulator          Up 39 seconds
17:02:42 mariadb            Up 38 seconds
17:02:42 prometheus         Up 43 seconds
17:02:43 Build docker image for robot framework
17:02:43 Error: No such image: policy-csit-robot
17:02:43 Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'...
17:02:43 Build robot framework docker image
17:02:44 Sending build context to Docker daemon  16.14MB
17:02:44 Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
17:02:44 3.10-slim-bullseye: Pulling from library/python
17:02:45 fa0650a893c2: Pull complete
17:02:46 c11bc7b0e3f4: Pull complete
17:02:46 7bbbc6da0c4e: Pull complete
17:02:46 f988c113d3f9: Pull complete
17:02:46 Digest: sha256:1305eb710cd778cee687a0f69dd04f4a506ee4e9c3b75454f82f51fae44a32f1
17:02:46 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
17:02:46  ---> 22d1c3b2c9f7
17:02:46 Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
17:02:48  ---> Running in 8fa54d1cd7f2
17:02:49 Removing intermediate container 8fa54d1cd7f2
17:02:49  ---> 87c752f14e1d
17:02:49 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
17:02:49  ---> Running in 9cca19063c08
17:02:49 Removing intermediate container 9cca19063c08
17:02:49  ---> 68c308f88fe8
17:02:49 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST
17:02:49  ---> Running in fb08e3adc857
17:02:49 Removing intermediate container fb08e3adc857
17:02:49  ---> dd628844c4a6
17:02:49 Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
17:02:49  ---> Running in 73e012c27eed
17:03:01 bcrypt==4.2.0
17:03:01 certifi==2024.8.30
17:03:01 cffi==1.17.1
17:03:01 charset-normalizer==3.3.2
17:03:01 confluent-kafka==2.5.3
17:03:01 cryptography==43.0.1
17:03:01 decorator==5.1.1
17:03:01 deepdiff==8.0.1
17:03:01 dnspython==2.7.0rc1
17:03:01 future==1.0.0
17:03:01 idna==3.10
17:03:01 Jinja2==3.1.4
17:03:01 jsonpath-rw==1.4.0
17:03:01 kafka-python==2.0.2
17:03:01 MarkupSafe==2.1.5
17:03:01 more-itertools==5.0.0
17:03:01 orderly-set==5.2.2
17:03:01 paramiko==3.5.0
17:03:01 pbr==6.1.0
17:03:01 ply==3.11
17:03:01 protobuf==5.28.2
17:03:01 pycparser==2.22
17:03:01 PyNaCl==1.5.0
17:03:01 PyYAML==6.0.2
17:03:01 requests==2.32.3
17:03:01 robotframework==7.1
17:03:01 robotframework-onap==0.6.0.dev105
17:03:01 robotframework-requests==1.0a11
17:03:01 robotlibcore-temp==1.0.2
17:03:01 six==1.16.0
17:03:01 urllib3==2.2.3
17:03:04 Removing intermediate container 73e012c27eed
17:03:04  ---> 83000054eec8
17:03:04 Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
17:03:04  ---> Running in fc231092eae8
17:03:05 Removing intermediate container fc231092eae8
17:03:05  ---> 30ce0ee5a0d1
17:03:05 Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
17:03:06  ---> d84864973f9c
17:03:06 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
17:03:06  ---> Running in f7cfb873f208
17:03:06 Removing intermediate container f7cfb873f208
17:03:06  ---> 2f2123973bb9
17:03:06 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
17:03:06  ---> Running in 43bd01125865
17:03:06 Removing intermediate container 43bd01125865
17:03:06  ---> ebd216ff146d
17:03:06 Successfully built ebd216ff146d
17:03:07 Successfully tagged policy-csit-robot:latest
17:03:09 top - 17:03:09 up 3 min, 0 users, load average: 3.19, 1.61, 0.64
17:03:09 Tasks: 209 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
17:03:09 %Cpu(s): 15.1 us, 3.6 sy, 0.0 ni, 76.1 id, 5.0 wa, 0.0 hi, 0.1 si, 0.1 st
17:03:09            total    used    free    shared  buff/cache  available
17:03:09 Mem:        31G    2.9G     22G      1.3M        6.2G        28G
17:03:09 Swap:      1.0G      0B    1.0G
17:03:09 NAMES              STATUS
17:03:09 policy-apex-pdp    Up About a minute
17:03:09 policy-pap         Up About a minute
17:03:09 policy-api         Up About a minute
17:03:09 kafka              Up About a minute
17:03:09 grafana            Up About a minute
17:03:09 zookeeper          Up About a minute
17:03:09 simulator          Up About a minute
17:03:09 mariadb            Up About a minute
17:03:09 prometheus         Up About a minute
17:03:12 CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
17:03:12 4a3877d05eaa   policy-apex-pdp   0.61%   175.8MiB / 31.41GiB   0.55%   25.6kB / 38.6kB   0B / 0B           49
17:03:12 456184a7cf08   policy-pap        1.90%   706.9MiB / 31.41GiB   2.20%   107kB / 101kB     0B / 149MB        63
17:03:12 756d64d45d83   policy-api        0.11%   456.6MiB / 31.41GiB   1.42%   989kB / 673kB     0B / 0B           53
17:03:12 89c6fcbde69f   kafka             3.97%   392.5MiB / 31.41GiB   1.22%   127kB / 125kB     0B / 537kB        87
17:03:12 b262e55389f5   grafana           0.04%   67.14MiB / 31.41GiB   0.21%   25kB / 4.82kB     0B / 26.7MB       18
17:03:12 39e59e5bdc2b   zookeeper         0.09%   87.37MiB / 31.41GiB   0.27%   57.5kB / 52.1kB   0B / 393kB        63
17:03:12 d64973899833   simulator         0.06%   121.1MiB / 31.41GiB   0.38%   1.43kB / 0B       0B / 0B           77
17:03:12 298627baa85c   mariadb           0.02%   102.7MiB / 31.41GiB   0.32%   969kB / 1.22MB    10.9MB / 71.4MB   31
17:03:12 9ee4ff3d7fc3   prometheus        0.02%   20.16MiB / 31.41GiB   0.06%   39.6kB / 2.12kB   229kB / 0B        13
17:03:12 Container policy-csit  Creating
17:03:12 Container policy-csit  Created
17:03:12 Attaching to policy-csit
17:03:13 policy-csit  | Invoking the robot tests from: pap-test.robot pap-slas.robot
17:03:13 policy-csit  | Run Robot test
17:03:13 policy-csit  | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
17:03:13 policy-csit  | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
17:03:13 policy-csit  | -v POLICY_API_IP:policy-api:6969
17:03:13 policy-csit  | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
17:03:13 policy-csit  | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
17:03:13 policy-csit  | -v POLICY_PAP_IP:policy-pap:6969
17:03:13 policy-csit  | -v APEX_IP:policy-apex-pdp:6969
17:03:13 policy-csit  | -v APEX_EVENTS_IP:policy-apex-pdp:23324
17:03:13 policy-csit  | -v KAFKA_IP:kafka:9092
17:03:13 policy-csit  | -v PROMETHEUS_IP:prometheus:9090
17:03:13 policy-csit  | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
17:03:13 policy-csit  | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
17:03:13 policy-csit  | -v DROOLS_IP:policy-drools-apps:6969
17:03:13 policy-csit  | -v DROOLS_IP_2:policy-drools-apps:9696
17:03:13 policy-csit  | -v TEMP_FOLDER:/tmp/distribution
17:03:13 policy-csit  | -v DISTRIBUTION_IP:policy-distribution:6969
17:03:13 policy-csit  | -v CLAMP_K8S_TEST:
17:03:13 policy-csit  | Starting Robot test suites ...
17:03:13 policy-csit  | ==============================================================================
17:03:13 policy-csit  | Pap-Test & Pap-Slas
17:03:13 policy-csit  | ==============================================================================
17:03:13 policy-csit  | Pap-Test & Pap-Slas.Pap-Test
17:03:13 policy-csit  | ==============================================================================
17:03:14 policy-csit  | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
17:03:14 policy-csit  | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
17:03:15 policy-csit  | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
17:03:15 policy-csit  | Healthcheck :: Verify policy pap health check | PASS |
17:03:35 policy-csit  | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
17:03:36 policy-csit  | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
17:03:36 policy-csit  | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
17:03:36 policy-csit  | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
17:03:37 policy-csit  | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
17:03:37 policy-csit  | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
17:03:37 policy-csit  | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
17:03:37 policy-csit  | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
17:03:37 policy-csit  | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
17:03:37 policy-csit  | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
17:03:38 policy-csit  | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
17:03:38 policy-csit  | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
17:03:38 policy-csit  | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
17:03:39 policy-csit  | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
17:03:39 policy-csit  | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
17:03:39 policy-csit  | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
17:03:39 policy-csit  | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
17:03:39 policy-csit  | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
17:03:39 policy-csit  | ------------------------------------------------------------------------------
17:03:39 policy-csit  | Pap-Test & Pap-Slas.Pap-Test | PASS |
17:03:39 policy-csit  | 22 tests, 22 passed, 0 failed
17:03:39 policy-csit  | ==============================================================================
17:03:39 policy-csit  | Pap-Test & Pap-Slas.Pap-Slas
17:03:39 policy-csit  | ==============================================================================
17:04:39 policy-csit  | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
17:04:39 policy-csit  | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
17:04:39 policy-csit  | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
17:04:39 policy-csit  | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
17:04:39 policy-csit  | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
17:04:39 policy-csit  | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
17:04:39 policy-csit  | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
17:04:39 policy-csit  | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
17:04:39 policy-csit  | ------------------------------------------------------------------------------
17:04:39 policy-csit  | Pap-Test & Pap-Slas.Pap-Slas | PASS |
17:04:39 policy-csit  | 8 tests, 8 passed, 0 failed
17:04:39 policy-csit  | ==============================================================================
17:04:39 policy-csit  | Pap-Test & Pap-Slas | PASS |
17:04:39 policy-csit  | 30 tests, 30 passed, 0 failed
17:04:39 policy-csit  | ==============================================================================
17:04:39 policy-csit  | Output:  /tmp/results/output.xml
17:04:39 policy-csit  | Log:     /tmp/results/log.html
17:04:39 policy-csit  | Report:  /tmp/results/report.html
17:04:39 policy-csit  | RESULT: 0
17:04:39 policy-csit exited with code 0
17:04:39 NAMES              STATUS
17:04:39 policy-apex-pdp    Up 2 minutes
17:04:39 policy-pap         Up 2 minutes
17:04:39 policy-api         Up 2 minutes
17:04:39 kafka              Up 2 minutes
17:04:39 grafana            Up 2 minutes
17:04:39 zookeeper          Up 2 minutes
17:04:39 simulator          Up 2 minutes
17:04:39 mariadb            Up 2 minutes
17:04:39 prometheus         Up 2 minutes
17:04:39 Shut down started!
17:04:42 Collecting logs from docker compose containers...
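The shutdown phase above ("Shut down started!" followed by "Collecting logs from docker compose containers...") can be sketched roughly as follows. This is an assumed reconstruction, not the actual teardown script from policy/docker, and the helper name `collect_logs_and_shutdown` is hypothetical:

```shell
#!/bin/sh
# Hedged sketch of the teardown phase (assumed; the real logic lives in the
# policy/docker compose scripts): save one log file per compose service,
# then stop the stack and remove its volumes.
collect_logs_and_shutdown() {
  outdir=${1:-/tmp/compose-logs}
  mkdir -p "$outdir"
  # One log file per service, colour codes stripped for archiving
  for svc in $(docker compose ps --services); do
    docker compose logs --no-color "$svc" > "$outdir/$svc.log" 2>&1
  done
  docker compose down --volumes
}
```

A call such as `collect_logs_and_shutdown /tmp/logs` would produce files like `/tmp/logs/grafana.log`, corresponding to the "======== Logs from grafana ========" section that follows.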
17:04:45 ======== Logs from grafana ========
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.963394163Z level=info msg="Starting Grafana" version=11.2.0 commit=2a88694fd3ced0335bf3726cc5d0adc2d1858855 branch=v11.2.x compiled=2024-09-29T17:01:59Z
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.963880327Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.963918157Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.963960018Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964056638Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964092969Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964146669Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.96421152Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.96427436Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964343681Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964411241Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964487352Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964526372Z level=info msg=Target target=[all]
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964629913Z level=info msg="Path Home" path=/usr/share/grafana
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964664143Z level=info msg="Path Data" path=/var/lib/grafana
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964704214Z level=info msg="Path Logs" path=/var/log/grafana
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964765504Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964799124Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
17:04:45 grafana | logger=settings t=2024-09-29T17:01:59.964854696Z level=info msg="App mode production"
17:04:45 grafana | logger=featuremgmt t=2024-09-29T17:01:59.965229449Z level=info msg=FeatureToggles kubernetesPlaylists=true cloudWatchNewLabelParsing=true autoMigrateXYChartPanel=true correlations=true formatString=true cloudWatchRoundUpEndTime=true transformationsRedesign=true panelMonitoring=true managedPluginsInstall=true prometheusMetricEncyclopedia=true alertingSimplifiedRouting=true prometheusDataplane=true alertingNoDataErrorExecution=true ssoSettingsApi=true exploreMetrics=true nestedFolders=true logsExploreTableVisualisation=true recoveryThreshold=true groupToNestedTableTransformation=true logRowsPopoverMenu=true influxdbBackendMigration=true topnav=true prometheusAzureOverrideAudience=true addFieldFromCalculationStatFunctions=true annotationPermissionUpdate=true cloudWatchCrossAccountQuerying=true lokiQuerySplitting=true lokiMetricDataplane=true awsAsyncQueryCaching=true transformationsVariableSupport=true logsContextDatasourceUi=true publicDashboards=true angularDeprecationUI=true recordedQueriesMulti=true lokiQueryHints=true lokiStructuredMetadata=true dataplaneFrontendFallback=true logsInfiniteScrolling=true tlsMemcached=true alertingInsights=true prometheusConfigOverhaulAuth=true dashgpt=true
17:04:45 grafana | logger=sqlstore t=2024-09-29T17:01:59.96538163Z level=info msg="Connecting to DB" dbtype=sqlite3
17:04:45 grafana | logger=sqlstore t=2024-09-29T17:01:59.965467051Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.967229315Z level=info msg="Locking database"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.967308075Z level=info msg="Starting DB migrations"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.967919861Z level=info msg="Executing migration" id="create migration_log table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.968848969Z level=info msg="Migration successfully executed" id="create migration_log table" duration=928.908µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.973068444Z level=info msg="Executing migration" id="create user table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.973755881Z level=info msg="Migration successfully executed" id="create user table" duration=687.307µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.979317708Z level=info msg="Executing migration" id="add unique index user.login"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.980151264Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=833.126µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.983786315Z level=info msg="Executing migration" id="add unique index user.email"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.984592202Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=805.417µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.987754799Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.988531835Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=776.746µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.994156203Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.99508707Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=929.597µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:01:59.99861013Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.002629104Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.018134ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.005958102Z level=info msg="Executing migration" id="create user table v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.006878907Z level=info msg="Migration successfully executed" id="create user table v2" duration=920.105µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.012438704Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.013730612Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.291178ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.017177551Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.018452558Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.274277ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.047273596Z level=info msg="Executing migration" id="copy data_source v1 to v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.04800394Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2"
duration=730.984µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.051590199Z level=info msg="Executing migration" id="Drop old table user_v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.052475135Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=885.186µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.057691024Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.05888017Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.184066ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.062018017Z level=info msg="Executing migration" id="Update user table charset" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.062154227Z level=info msg="Migration successfully executed" id="Update user table charset" duration=136.57µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.065311375Z level=info msg="Executing migration" id="Add last_seen_at column to user" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.066497572Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.185677ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.069593639Z level=info msg="Executing migration" id="Add missing user data" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.06993402Z level=info msg="Migration successfully executed" id="Add missing user data" duration=340.181µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.074932708Z level=info msg="Executing migration" id="Add is_disabled column to user" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.076192754Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.259616ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.079224831Z level=info msg="Executing migration" id="Add index 
user.login/user.email" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.080105817Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=880.765µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.083053093Z level=info msg="Executing migration" id="Add is_service_account column to user" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.08442281Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.368897ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.089670409Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.097780793Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.109904ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.101069251Z level=info msg="Executing migration" id="Add uid column to user" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.102317578Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.247867ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.105346565Z level=info msg="Executing migration" id="Update uid column values for users" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.105716777Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=364.772µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.108788854Z level=info msg="Executing migration" id="Add unique index user_uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.109624958Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=835.384µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.114815386Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.11545755Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=641.314µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.118893148Z level=info msg="Executing migration" id="update login and email fields to lowercase" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.119593403Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=699.265µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.122933761Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.123312733Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=378.292µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.126276019Z level=info msg="Executing migration" id="create temp user table v1-7" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.127183004Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=906.645µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.132384362Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.133162847Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=775.335µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.136022113Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.136866057Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=843.194µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.139883054Z level=info msg="Executing migration" id="create 
index IDX_temp_user_code - v1-7" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.140652488Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=769.064µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.145489275Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.146297169Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=807.534µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.149489626Z level=info msg="Executing migration" id="Update temp_user table charset" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.149558237Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=69.591µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.152615444Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.153361118Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=744.224µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.158044803Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.158803318Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=758.155µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.161747224Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.162491678Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=744.074µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.165517264Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 17:04:45 grafana | 
logger=migrator t=2024-09-29T17:02:00.166254098Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=736.414µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.170879114Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.17390961Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.030086ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.177054597Z level=info msg="Executing migration" id="create temp_user v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.177970943Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=915.836µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.18099073Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.181807474Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=816.794µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.186704191Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.189058054Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=2.478724ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.193063746Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.194544753Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.478577ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.197676831Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 17:04:45 grafana | logger=migrator 
t=2024-09-29T17:02:00.198536206Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=861.835µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.203173001Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.203703424Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=529.583µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.206920872Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.207837876Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=916.364µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.211063554Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.211758988Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=696.264µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.214863725Z level=info msg="Executing migration" id="create star table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.215606199Z level=info msg="Migration successfully executed" id="create star table" duration=741.994µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.220986099Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.221826613Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=840.314µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.224711339Z level=info msg="Executing migration" id="create org table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.225580364Z level=info msg="Migration successfully 
executed" id="create org table v1" duration=869.065µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.228702891Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.229528965Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=826.024µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.232486052Z level=info msg="Executing migration" id="create org_user table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.233257676Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=770.974µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.238308593Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.239129128Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=820.405µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.242197995Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.24300452Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=806.465µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.245901735Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.24675486Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=855.515µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.249880237Z level=info msg="Executing migration" id="Update org table charset" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.249974177Z level=info msg="Migration successfully executed" id="Update org table charset" duration=90.13µs 17:04:45 grafana | logger=migrator 
t=2024-09-29T17:02:00.254955545Z level=info msg="Executing migration" id="Update org_user table charset" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.255024185Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=69.16µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.258003781Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.258329623Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=324.972µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.26137285Z level=info msg="Executing migration" id="create dashboard table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.262175305Z level=info msg="Migration successfully executed" id="create dashboard table" duration=802.225µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.265080111Z level=info msg="Executing migration" id="add index dashboard.account_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.265962035Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=881.084µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.271163204Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.27240511Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.246596ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.27584232Z level=info msg="Executing migration" id="create dashboard_tag table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.276974466Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.131126ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.280243973Z level=info msg="Executing migration" id="add unique index 
dashboard_tag.dasboard_id_term" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.281114158Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=867.065µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.286078785Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.28690404Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=825.135µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.299803621Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.304127465Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.322744ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.306497697Z level=info msg="Executing migration" id="create dashboard v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.307110881Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=613.194µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.310903362Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.311505855Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=604.643µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.313680307Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.31427811Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=598.373µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.316797744Z level=info msg="Executing migration" id="copy dashboard v1 
to v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.317113755Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=315.481µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.321655321Z level=info msg="Executing migration" id="drop table dashboard_v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.322281554Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=626.093µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.324797828Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.324905308Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=107.45µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.327221481Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.328527358Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.306067ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.333262344Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.334620442Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.357947ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.337260076Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.338584393Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.321087ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.341033297Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.3416487Z level=info 
msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=615.333µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.346268436Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.347616353Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.347387ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.350030456Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.350684569Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=654.513µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.353242134Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.353874937Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=632.443µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.358187951Z level=info msg="Executing migration" id="Update dashboard table charset" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.358248371Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=60.39µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.360929196Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.360984776Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=55.9µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.36351563Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.364906627Z level=info msg="Migration successfully executed" id="Add column folder_id in 
dashboard" duration=1.390577ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.367818913Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.369269202Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.449619ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.373454604Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.374869072Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.413678ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.377576857Z level=info msg="Executing migration" id="Add column uid in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.378975685Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.398157ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.38174728Z level=info msg="Executing migration" id="Update uid column values in dashboard" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.381972271Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=224.361µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.385959993Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.386594467Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=634.034µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.390235786Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.39081286Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=577.054µs 17:04:45 grafana | logger=migrator 
t=2024-09-29T17:02:00.393669896Z level=info msg="Executing migration" id="Update dashboard title length" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.393726396Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=56.43µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.397764658Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.398429181Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=663.713µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.401274877Z level=info msg="Executing migration" id="create dashboard_provisioning" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.40180392Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=528.443µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.404753107Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.408688018Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.934171ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.412502268Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.413340004Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=836.676µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.416541791Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.417450386Z level=info msg="Migration successfully executed" id="create index 
IDX_dashboard_provisioning_dashboard_id - v2" duration=908.025µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.420530603Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.421466598Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=934.415µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.4254124Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.425820452Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=407.272µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.428818598Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.429437441Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=618.023µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.43277396Z level=info msg="Executing migration" id="Add check_sum column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.435125262Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.350622ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.439092575Z level=info msg="Executing migration" id="Add index for dashboard_title"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.441059525Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.96524ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.444345104Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.444622985Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=278.241µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.449485952Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.449651513Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=165.451µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.452581019Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.453328963Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=750.544µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.456813972Z level=info msg="Executing migration" id="Add isPublic for dashboard"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.46023368Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.419629ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.465436839Z level=info msg="Executing migration" id="Add deleted for dashboard"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.46752817Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.088201ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.470524117Z level=info msg="Executing migration" id="Add index for deleted"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.471261581Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=736.784µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.522793144Z level=info msg="Executing migration" id="create data_source table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.524267771Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.473967ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.530584026Z level=info msg="Executing migration" id="add index data_source.account_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.531840063Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.254907ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.535059851Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.536292427Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.231956ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.539360785Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.540094048Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=732.533µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.545472678Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.546211212Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=734.964µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.54951477Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.558171417Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.656737ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.561138363Z level=info msg="Executing migration" id="create data_source table v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.561970209Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=831.006µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.567241748Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.568003961Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=763.683µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.571125478Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.571932223Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=805.965µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.575980275Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.57686274Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=883.035µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.582962903Z level=info msg="Executing migration" id="Add column with_credentials"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.586908145Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.944602ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.590528535Z level=info msg="Executing migration" id="Add secure json data column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.592785427Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.256902ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.595777674Z level=info msg="Executing migration" id="Update data_source table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.595805224Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=28.33µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.601825987Z level=info msg="Executing migration" id="Update initial version to 1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.602032248Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=205.751µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.605127305Z level=info msg="Executing migration" id="Add read_only data column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.608718764Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.590439ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.611986713Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.612293264Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=306.541µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.614788418Z level=info msg="Executing migration" id="Update json_data with nulls"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.614960659Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=172.471µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.619902065Z level=info msg="Executing migration" id="Add uid column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.623103863Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.199818ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.626492782Z level=info msg="Executing migration" id="Update uid value"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.626769864Z level=info msg="Migration successfully executed" id="Update uid value" duration=276.182µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.630005001Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.630800465Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=794.464µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.636661627Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.637422812Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=761.215µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.64075677Z level=info msg="Executing migration" id="Add is_prunable column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.644616721Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=3.854011ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.648202371Z level=info msg="Executing migration" id="Add api_version column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.650504253Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.301242ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.656219745Z level=info msg="Executing migration" id="create api_key table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.657328191Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.107856ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.66078281Z level=info msg="Executing migration" id="add index api_key.account_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.661955566Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.172406ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.665263345Z level=info msg="Executing migration" id="add index api_key.key"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.666072799Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=809.064µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.671587689Z level=info msg="Executing migration" id="add index api_key.account_id_name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.672753535Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.164066ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.676316635Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.677406341Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.089726ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.680701299Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.681477253Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=774.974µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.686737452Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.687489716Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=752.054µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.690589574Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.699377042Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.787988ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.702528239Z level=info msg="Executing migration" id="create api_key table v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.703249153Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=719.544µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.708465121Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.709611477Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.145496ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.712765455Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.71356609Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=803.456µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.718901399Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.719720673Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=818.734µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.722919591Z level=info msg="Executing migration" id="copy api_key v1 to v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.723421203Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=499.362µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.726756521Z level=info msg="Executing migration" id="Drop old table api_key_v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.727644077Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=887.906µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.733330898Z level=info msg="Executing migration" id="Update api_key table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.733362558Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=32.39µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.736565536Z level=info msg="Executing migration" id="Add expires to api_key table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.740490087Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.919682ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.744002707Z level=info msg="Executing migration" id="Add service account foreign key"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.748178489Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.178372ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.751444097Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.751644098Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=199.271µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.757406659Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.761628093Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.220434µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.764882731Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.767484545Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.601444ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.770486311Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.771258025Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=770.684µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.778201524Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.778778747Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=577.283µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.781890904Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.783436523Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.545549ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.787141613Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.788624441Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.481118ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.794401903Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.795272407Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=870.504µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.798455435Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.799586411Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.131126ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.80306091Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.803169561Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=105.171µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.808927622Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.808953043Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=26.271µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.811897569Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.814561643Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.663674ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.81771097Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.820383465Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.673945ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.823346651Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.823409972Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=63.861µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.82855724Z level=info msg="Executing migration" id="create quota table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.829253494Z level=info msg="Migration successfully executed" id="create quota table v1" duration=696.194µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.83230176Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.833322966Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.019366ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.836597744Z level=info msg="Executing migration" id="Update quota table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.836635964Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=38.2µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.842407016Z level=info msg="Executing migration" id="create plugin_setting table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.843574902Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.164506ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.846998711Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.848320998Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.321527ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.852502781Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.85774069Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.241089ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.861845293Z level=info msg="Executing migration" id="Update plugin_setting table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.862008014Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=164.761µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.866272276Z level=info msg="Executing migration" id="create session table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.868002906Z level=info msg="Migration successfully executed" id="create session table" duration=1.72863ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.871507935Z level=info msg="Executing migration" id="Drop old table playlist table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.871695086Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=186.341µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.876198221Z level=info msg="Executing migration" id="Drop old table playlist_item table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.876368342Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=169.821µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.879827741Z level=info msg="Executing migration" id="create playlist table v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.880597445Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=768.924µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.884202275Z level=info msg="Executing migration" id="create playlist item table v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.885021069Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=818.114µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.889299273Z level=info msg="Executing migration" id="Update playlist table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.889365274Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=68.711µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.892763582Z level=info msg="Executing migration" id="Update playlist_item table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.892833863Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=71.381µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.896485422Z level=info msg="Executing migration" id="Add playlist column created_at"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.90157179Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.086258ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.905104339Z level=info msg="Executing migration" id="Add playlist column updated_at"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.908363897Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.259218ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.913229594Z level=info msg="Executing migration" id="drop preferences table v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.913403315Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=173.241µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.916735113Z level=info msg="Executing migration" id="drop preferences table v3"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.916895324Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=159.851µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.920166063Z level=info msg="Executing migration" id="create preferences table v3"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.921134448Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=970.385µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.925486801Z level=info msg="Executing migration" id="Update preferences table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.925574451Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=87.19µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.92889469Z level=info msg="Executing migration" id="Add column team_id in preferences"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.932208478Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.311318ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.935594277Z level=info msg="Executing migration" id="Update team_id column values in preferences"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.935879338Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=284.711µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.938639493Z level=info msg="Executing migration" id="Add column week_start in preferences"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.941839511Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.199918ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.945998904Z level=info msg="Executing migration" id="Add column preferences.json_data"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.949420833Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.419819ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.981473788Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.981640799Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=167.321µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.984908748Z level=info msg="Executing migration" id="Add preferences index org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.985791962Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=882.735µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.990310617Z level=info msg="Executing migration" id="Add preferences index user_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.991177702Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=867.595µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.9946361Z level=info msg="Executing migration" id="create alert table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:00.99638441Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.74634ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.000682843Z level=info msg="Executing migration" id="add index alert org_id & id "
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.002690875Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=2.014252ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.006234304Z level=info msg="Executing migration" id="add index alert state"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.007066933Z level=info msg="Migration successfully executed" id="add index alert state" duration=832.459µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.010188802Z level=info msg="Executing migration" id="add index alert dashboard_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.010979508Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=792.106µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.01608227Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.016740816Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=658.146µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.019906192Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.020805789Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=897.907µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.024233139Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.025485989Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.24905ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.030972675Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.040825877Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.852742ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.045579857Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.046288123Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=708.346µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.049576161Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.050405737Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=832.046µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.055659532Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.055968554Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=308.822µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.058414615Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.059225921Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=811.306µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.062768021Z level=info msg="Executing migration" id="create alert_notification table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.063938061Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.16903ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.069243106Z level=info msg="Executing migration" id="Add column is_default"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.072803986Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.5633ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.075867551Z level=info msg="Executing migration" id="Add column frequency"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.079545852Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.677661ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.082729779Z level=info msg="Executing migration" id="Add column send_reminder"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.086191438Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.461269ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.091503843Z level=info msg="Executing migration" id="Add column disable_resolve_message"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.095084603Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.58076ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.098175469Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.098999076Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=823.617µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.102222063Z level=info msg="Executing migration" id="Update alert table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.102252574Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=31.001µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.10535551Z level=info msg="Executing migration" id="Update alert_notification table charset"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.10539061Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=36.65µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.110383391Z level=info msg="Executing migration" id="create notification_journal table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.111260329Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=876.938µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.114379615Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.115277923Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=898.208µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.118242477Z level=info msg="Executing migration" id="drop alert_notification_journal"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.119120235Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=876.358µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.124396799Z level=info msg="Executing migration" id="create alert_notification_state table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.125112916Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=716.267µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.1279591Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.129677514Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.707903ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.134795597Z level=info msg="Executing migration" id="Add for to alert table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.139059443Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.263606ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.143252318Z level=info msg="Executing migration" id="Add column uid in alert_notification"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.149750512Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.498084ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.157165335Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.157535538Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=376.903µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.165401064Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.166463873Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.062689ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.169470739Z level=info msg="Executing migration" id="Remove unique index org_id_name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.170456277Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=975.988µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.175742322Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.179834246Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.092414ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.183268904Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.183349945Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=80.181µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.186410412Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.187094197Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=684.525µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.18985145Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.190487195Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=635.435µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.195269106Z level=info msg="Executing migration" id="Drop old annotation table v4"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.195369767Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=96.751µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.198369172Z level=info msg="Executing migration" id="create annotation table v5"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.19940992Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.035778ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.203602156Z level=info msg="Executing migration" id="add index annotation 0 v3"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.204546724Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=943.868µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.210736925Z level=info msg="Executing migration" id="add index annotation 1 v3"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.211638704Z level=info msg="Migration successfully
executed" id="add index annotation 1 v3" duration=900.339µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.215134673Z level=info msg="Executing migration" id="add index annotation 2 v3" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.216139051Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.007828ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.219047505Z level=info msg="Executing migration" id="add index annotation 3 v3" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.220046004Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=998.149µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.225154836Z level=info msg="Executing migration" id="add index annotation 4 v3" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.226327557Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.171551ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.229522544Z level=info msg="Executing migration" id="Update annotation table charset" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.229568754Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=47.88µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.233283425Z level=info msg="Executing migration" id="Add column region_id to annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.237217019Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.929244ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.242991857Z level=info msg="Executing migration" id="Drop category_id index" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.243971155Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=975.548µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.246768539Z level=info 
msg="Executing migration" id="Add column tags to annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.250721283Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.952333ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.253434894Z level=info msg="Executing migration" id="Create annotation_tag table v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.25408104Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=646.106µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.259200534Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.260111011Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=910.057µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.263055936Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.263862822Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=806.076µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.267078189Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.281911775Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.837125ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.287177359Z level=info msg="Executing migration" id="Create annotation_tag table v3" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.288133717Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=955.868µs 17:04:45 grafana | logger=migrator 
t=2024-09-29T17:02:01.292469203Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.293736904Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.268991ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.29809241Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.298412703Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=319.913µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.303465705Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.30400617Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=540.285µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.306340389Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.306530111Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=189.482µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.309206813Z level=info msg="Executing migration" id="Add created time to annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.312854525Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.647602ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.318756114Z level=info msg="Executing migration" id="Add updated time to annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.322119793Z level=info msg="Migration successfully executed" id="Add 
updated time to annotation table" duration=3.340388ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.325011317Z level=info msg="Executing migration" id="Add index for created in annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.326811112Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.804445ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.330851346Z level=info msg="Executing migration" id="Add index for updated in annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.332313238Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.462012ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.33728719Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.337517712Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=233.023µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.340975561Z level=info msg="Executing migration" id="Add epoch_end column" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.345185206Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.209075ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.348444884Z level=info msg="Executing migration" id="Add index for epoch_end" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.349300471Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=854.928µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.353317575Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.353497196Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" 
duration=177.331µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.356726243Z level=info msg="Executing migration" id="Move region to single row" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.357102566Z level=info msg="Migration successfully executed" id="Move region to single row" duration=376.733µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.361449313Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.36227835Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=828.857µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.365500297Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.366313494Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=812.857µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.370729351Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.372141683Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.411482ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.375831694Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.377179556Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.347822ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.381739313Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 17:04:45 
grafana | logger=migrator t=2024-09-29T17:02:01.382553681Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=814.038µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.385886699Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.386817177Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=927.258µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.390870981Z level=info msg="Executing migration" id="Increase tags column to length 4096" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.391055083Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=189.351µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.395463479Z level=info msg="Executing migration" id="create test_data table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.396252335Z level=info msg="Migration successfully executed" id="create test_data table" duration=788.186µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.431017028Z level=info msg="Executing migration" id="create dashboard_version table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.432280039Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.263211ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.436027191Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.437390902Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.363641ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.442110622Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 17:04:45 
grafana | logger=migrator t=2024-09-29T17:02:01.442969498Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=858.166µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.446174745Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.446349087Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=174.492µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.449639465Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.45019529Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=555.625µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.455569495Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.455668036Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=99.261µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.459354427Z level=info msg="Executing migration" id="create team table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.460484306Z level=info msg="Migration successfully executed" id="create team table" duration=1.129259ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.464301778Z level=info msg="Executing migration" id="add index team.org_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.46568221Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.379972ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.470164938Z level=info msg="Executing migration" id="add unique index team_org_id_name" 17:04:45 grafana | 
logger=migrator t=2024-09-29T17:02:01.471616129Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.450912ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.475356741Z level=info msg="Executing migration" id="Add column uid in team" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.481844736Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.488705ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.485206364Z level=info msg="Executing migration" id="Update uid column values in team" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.485380295Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=171.981µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.488617232Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.489474Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=857.458µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.493852357Z level=info msg="Executing migration" id="create team member table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.494589353Z level=info msg="Migration successfully executed" id="create team member table" duration=737.076µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.498294254Z level=info msg="Executing migration" id="add index team_member.org_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.499605005Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.310221ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.503230106Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.504634887Z level=info msg="Migration successfully executed" id="add unique index 
team_member_org_id_team_id_user_id" duration=1.403911ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.508705202Z level=info msg="Executing migration" id="add index team_member.team_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.509617919Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=911.637µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.512948857Z level=info msg="Executing migration" id="Add column email to team table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.517424185Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.472328ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.520809303Z level=info msg="Executing migration" id="Add column external to team_member table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.525332012Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.520379ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.529995691Z level=info msg="Executing migration" id="Add column permission to team_member table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.534469708Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.475467ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.537597095Z level=info msg="Executing migration" id="create dashboard acl table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.538492332Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=895.157µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.54189169Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.542789978Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=892.618µs 17:04:45 grafana 
| logger=migrator t=2024-09-29T17:02:01.547198445Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.548133543Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=934.468µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.551603122Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.552892853Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.285351ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.556931187Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.558394909Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.463512ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.562959908Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.563833075Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=872.907µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.567300535Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.568192842Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=891.557µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.57157506Z level=info msg="Executing migration" id="add index dashboard_permission" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.572479848Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=903.788µs 17:04:45 grafana | logger=migrator 
t=2024-09-29T17:02:01.576869415Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.577389809Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=520.434µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.58115201Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.581538435Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=386.565µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.587814987Z level=info msg="Executing migration" id="create tag table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.588918376Z level=info msg="Migration successfully executed" id="create tag table" duration=1.105349ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.592621428Z level=info msg="Executing migration" id="add index tag.key_value" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.593547775Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=925.807µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.597170055Z level=info msg="Executing migration" id="create login attempt table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.597882231Z level=info msg="Migration successfully executed" id="create login attempt table" duration=711.886µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.602113887Z level=info msg="Executing migration" id="add index login_attempt.username" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.603050505Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=952.258µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.606748717Z level=info msg="Executing migration" id="drop index 
IDX_login_attempt_username - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.607680114Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=939.068µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.611523976Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.626540143Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.016377ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.630901219Z level=info msg="Executing migration" id="create login_attempt v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.631613815Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=712.496µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.635188966Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.636090773Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=902.597µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.639948175Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.64040505Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=457.225µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.645157239Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.646068077Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=911.558µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.650297412Z level=info msg="Executing migration" id="create user auth table" 17:04:45 grafana | 
logger=migrator t=2024-09-29T17:02:01.651451723Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.15463ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.655205264Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.656327653Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.121309ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.661053213Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.661114293Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=62.53µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.664673383Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.672695771Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.023028ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.67620884Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.681267333Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.058343ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.685736261Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.690948535Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.211774ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.694783196Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 17:04:45 grafana | logger=migrator 
t=2024-09-29T17:02:01.699873999Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.090563ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.703312958Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.704302276Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=988.628µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.707637935Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.712770367Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.132012ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.716845352Z level=info msg="Executing migration" id="create server_lock table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.717598528Z level=info msg="Migration successfully executed" id="create server_lock table" duration=749.946µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.720947316Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.721896805Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=948.109µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.727948076Z level=info msg="Executing migration" id="create user auth token table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.729344817Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.396491ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.733117389Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.734593421Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.475542ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.738203132Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.739635444Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.431782ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.743138553Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.744100262Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=961.279µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.748357137Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.755333176Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.977689ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.758762535Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.759702702Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=939.167µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.763915878Z level=info msg="Executing migration" id="create cache_data table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.764780345Z level=info msg="Migration successfully executed" id="create cache_data table" duration=864.527µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.768071273Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.769310384Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.303022ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.772820083Z level=info msg="Executing migration" id="create short_url table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.774231564Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.410241ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.778791403Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.780082024Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.28976ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.783640204Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.783732665Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=93.081µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.786316177Z level=info msg="Executing migration" id="delete alert_definition table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.786400687Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=85.91µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.789789816Z level=info msg="Executing migration" id="recreate alert_definition table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.791292018Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.501602ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.795582454Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.796578762Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=995.448µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.800402225Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.801372043Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=969.418µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.805314046Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.805382167Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=68.471µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.80936058Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.810354668Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=993.928µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.813482344Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.814935277Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.451973ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.818417546Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.819927369Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.508333ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.824086884Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.825077603Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=987.329µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.830049664Z level=info msg="Executing migration" id="Add column paused in alert_definition"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.836407268Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.357684ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.839545304Z level=info msg="Executing migration" id="drop alert_definition table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.840482542Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=938.458µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.888584356Z level=info msg="Executing migration" id="delete alert_definition_version table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.888718337Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=134.251µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.892084495Z level=info msg="Executing migration" id="recreate alert_definition_version table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.893704409Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.619514ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.898029886Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.899218486Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.18722ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.902508223Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.903476222Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=967.619µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.9067904Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.90685429Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=63.7µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.910932354Z level=info msg="Executing migration" id="drop alert_definition_version table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.911793412Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=857.728µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.915422712Z level=info msg="Executing migration" id="create alert_instance table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.916972285Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.547873ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.920318274Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.921411053Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.092249ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.927237402Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.928423781Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.186369ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.931639239Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.937307217Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.667098ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.940951737Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.942253298Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.3018ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.94728657Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.948303669Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.017179ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.951621917Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.978681184Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.059347ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:01.98299609Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.005747412Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.750712ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.00912994Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.01034009Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.20936ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.013845148Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.014872676Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.027758ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.01893601Z level=info msg="Executing migration" id="add current_reason column related to current_state"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.024629967Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.693147ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.028119695Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.033901282Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.791737ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.037501361Z level=info msg="Executing migration" id="create alert_rule table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.038549931Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.04617ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.042896555Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.043934265Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.037259ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.047291572Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.04837472Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.082518ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.051747408Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.052868867Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.120969ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.057118012Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.057187182Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=69.1µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.060567741Z level=info msg="Executing migration" id="add column for to alert_rule"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.069425023Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.857652ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.072905941Z level=info msg="Executing migration" id="add column annotations to alert_rule"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.079035261Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.12671ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.083123915Z level=info msg="Executing migration" id="add column labels to alert_rule"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.089083003Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.958488ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.092423421Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.093391198Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=967.377µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.096979178Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.098071557Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.091659ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.10223636Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.10825497Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.01794ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.111744719Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.117692637Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.947128ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.121178076Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.122281785Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.102549ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.126686481Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.132674529Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.987858ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.135748874Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.14009448Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.345426ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.143629739Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.14370048Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=72.691µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.147907875Z level=info msg="Executing migration" id="create alert_rule_version table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.149368956Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.460251ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.153366719Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.155115783Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.748204ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.158915784Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.160088014Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.17109ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.164676412Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.164967924Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=291.932µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.168905216Z level=info msg="Executing migration" id="add column for to alert_rule_version"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.176568599Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.664383ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.180217818Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.186427619Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.206931ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.191055477Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.197225737Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.16993ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.200783017Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.207020078Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.236461ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.210601807Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.215158394Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.556147ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.2196108Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.219677001Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=66.171µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.222740147Z level=info msg="Executing migration" id=create_alert_configuration_table
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.223555623Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=814.576µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.227285313Z level=info msg="Executing migration" id="Add column default in alert_configuration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.235557631Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=8.272378ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.240083018Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.240152919Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=65.731µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.243482716Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.249671066Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.18785ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.252985823Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.253980572Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=993.359µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.257894413Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.265343125Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.448512ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.269560769Z level=info msg="Executing migration" id=create_ngalert_configuration_table
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.270184254Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=623.415µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.273623212Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.274378188Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=755.126µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.279619731Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.287895799Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.275948ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.292404945Z level=info msg="Executing migration" id="create provenance_type table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.293321873Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=921.248µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.308182915Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.31002780Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.843945ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.313545418Z level=info msg="Executing migration" id="create alert_image table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.314449636Z level=info msg="Migration successfully executed" id="create alert_image table" duration=904.117µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.318652171Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.319695679Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.043089ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.323168417Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.323235817Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=68.2µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.326666846Z level=info msg="Executing migration" id=create_alert_configuration_history_table
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.327641834Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=974.988µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.332130451Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.333721203Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.555212ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.337200282Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.337654646Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.342047581Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.342644146Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=595.565µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.34678580Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.348466854Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.681394ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.352084894Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.358899019Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.814565ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.362175846Z level=info msg="Executing migration" id="create library_element table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.363282575Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.106189ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.36755160Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.36873840Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.18612ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.372061417Z level=info msg="Executing migration" id="create library_element_connection table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.373041085Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=981.768µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.376524613Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.377626153Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.10154ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.381791026Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.382906186Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.11516ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.386002121Z level=info msg="Executing migration" id="increase max description length to 2048"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.386031011Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=29.2µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.389495029Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.389762151Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=266.312µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.393559182Z level=info msg="Executing migration" id="add library_element folder uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.401559058Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.000126ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.405751833Z level=info msg="Executing migration" id="populate library_element folder_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.406249376Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=496.513µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.409627424Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.410902355Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.273931ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.414557174Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.414933917Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=376.743µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.419347114Z level=info msg="Executing migration" id="create data_keys table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.420481613Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.134579ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.423783979Z level=info msg="Executing migration" id="create secrets table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.424731338Z level=info msg="Migration successfully executed" id="create secrets table" duration=946.359µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.430517515Z level=info msg="Executing migration" id="rename data_keys name column to id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.463553834Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=32.994639ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.467101544Z level=info msg="Executing migration" id="add name column into data_keys"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.472350927Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.249383ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.476885924Z level=info msg="Executing migration" id="copy data_keys id column values into name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.477160576Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=273.952µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.480770416Z level=info msg="Executing migration" id="rename data_keys name column to label"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.51317010Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.399524ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.51679590Z level=info msg="Executing migration" id="rename data_keys id column back to name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.547041327Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.246257ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.551810467Z level=info msg="Executing migration" id="create kv_store table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.552561112Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=749.675µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.556245343Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.557481013Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.23472ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.565033655Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.565508378Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=484.553µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.570718511Z level=info msg="Executing migration" id="create permission table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.572223533Z level=info msg="Migration successfully executed" id="create permission table" duration=1.504262ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.575332389Z level=info msg="Executing migration" id="add unique index permission.role_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.576606789Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.2704ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.579947087Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.581076755Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.129468ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.584239712Z level=info msg="Executing migration" id="create role table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.585147309Z level=info msg="Migration successfully executed" id="create role table" duration=907.597µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.589651045Z level=info msg="Executing migration" id="add column display_name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.601228301Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.577256ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.604432657Z level=info msg="Executing migration" id="add column group_name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.610007123Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.574466ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.613018437Z level=info msg="Executing migration" id="add index role.org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.613751563Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=733.126µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.618438401Z level=info msg="Executing migration" id="add unique index role_org_id_name"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.61947520Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.036649ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.622655456Z level=info msg="Executing migration" id="add index role_org_id_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.623667804Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.012348ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.627140302Z level=info msg="Executing migration" id="create team role table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.628241911Z level=info msg="Migration successfully executed" id="create team role table" duration=1.101609ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.633160801Z level=info msg="Executing migration" id="add index team_role.org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.63426350Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.101209ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.637268025Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.638389894Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.122899ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.641725532Z level=info msg="Executing migration" id="add index team_role.team_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.64285373Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.129778ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.64766797Z level=info msg="Executing migration" id="create user role table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.648471536Z level=info msg="Migration successfully executed" id="create user role table" duration=803.206µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.653521818Z level=info msg="Executing migration" id="add index user_role.org_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.655864467Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.342049ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.659624608Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.661418723Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.793085ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.666572515Z level=info msg="Executing migration" id="add index user_role.user_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.667658613Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.085878ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.671380634Z level=info 
msg="Executing migration" id="create builtin role table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.672206531Z level=info msg="Migration successfully executed" id="create builtin role table" duration=824.867µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.675531948Z level=info msg="Executing migration" id="add index builtin_role.role_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.676628147Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.094629ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.681300435Z level=info msg="Executing migration" id="add index builtin_role.name" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.682677367Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.377721ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.685770982Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.693441575Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.673053ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.696407339Z level=info msg="Executing migration" id="add index builtin_role.org_id" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.697165405Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=757.626µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.70151492Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.702311757Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=794.877µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.705290841Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 17:04:45 
grafana | logger=migrator t=2024-09-29T17:02:02.707073006Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.782124ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.738259221Z level=info msg="Executing migration" id="add unique index role.uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.740577389Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=2.318458ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.746511388Z level=info msg="Executing migration" id="create seed assignment table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.747278235Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=767.457µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.750600682Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.75161938Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.018528ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.754651775Z level=info msg="Executing migration" id="add column hidden to role table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.762382638Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.730923ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.767096157Z level=info msg="Executing migration" id="permission kind migration" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.773484099Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.386402ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.776298822Z level=info msg="Executing migration" id="permission attribute migration" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.784018655Z level=info msg="Migration successfully executed" 
id="permission attribute migration" duration=7.719933ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.786929209Z level=info msg="Executing migration" id="permission identifier migration" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.794788923Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.861044ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.799505741Z level=info msg="Executing migration" id="add permission identifier index" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.800788172Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.297891ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.804004158Z level=info msg="Executing migration" id="add permission action scope role_id index" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.805110007Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.105249ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.808289674Z level=info msg="Executing migration" id="remove permission role_id action scope index" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.809295872Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.005908ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.813877669Z level=info msg="Executing migration" id="create query_history table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.815159309Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.27946ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.819914089Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.821605552Z level=info msg="Migration successfully executed" id="add index 
query_history.org_id-created_by-datasource_uid" duration=1.690943ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.824672647Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.824717537Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=45.15µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.829398156Z level=info msg="Executing migration" id="create query_history_details table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.82996702Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=568.544µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.833179926Z level=info msg="Executing migration" id="rbac disabled migrator" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.833230397Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=51.771µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.836509114Z level=info msg="Executing migration" id="teams permissions migration" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.83718122Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=671.676µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.840486187Z level=info msg="Executing migration" id="dashboard permissions" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.841613876Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.126579ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.846480935Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.847446744Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=965.509µs 17:04:45 
grafana | logger=migrator t=2024-09-29T17:02:02.8506483Z level=info msg="Executing migration" id="drop managed folder create actions" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.850851272Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=203.632µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.854032617Z level=info msg="Executing migration" id="alerting notification permissions" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.854509351Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=476.244µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.857572546Z level=info msg="Executing migration" id="create query_history_star table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.858904428Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.328192ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.86540586Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.867702539Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.293129ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.871131667Z level=info msg="Executing migration" id="add column org_id in query_history_star" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.880533034Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.402567ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.883509288Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.883559258Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=50.42µs 
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.888003295Z level=info msg="Executing migration" id="create correlation table v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.889018723Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.014818ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.892197069Z level=info msg="Executing migration" id="add index correlations.uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.893635851Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.420752ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.898753223Z level=info msg="Executing migration" id="add index correlations.source_uid" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.900476077Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.721424ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.903588873Z level=info msg="Executing migration" id="add correlation config column" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.909509261Z level=info msg="Migration successfully executed" id="add correlation config column" duration=5.919168ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.912962089Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.913988757Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.025908ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.918648096Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.921136816Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.4684ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.924648225Z level=info msg="Executing 
migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.944605808Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=19.958063ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.947755833Z level=info msg="Executing migration" id="create correlation v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.948636261Z level=info msg="Migration successfully executed" id="create correlation v2" duration=878.408µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.952964076Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.953729612Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=764.976µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.956566225Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.957346012Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=779.297µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.960196596Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.961289734Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.092648ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.965747801Z level=info msg="Executing migration" id="copy correlation v1 to v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.965972083Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=224.672µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.968860986Z level=info msg="Executing migration" id="drop 
correlation_tmp_qwerty" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.970016075Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.154799ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.97310773Z level=info msg="Executing migration" id="add provisioning column" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.983781968Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.674238ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.988600738Z level=info msg="Executing migration" id="create entity_events table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.989366804Z level=info msg="Migration successfully executed" id="create entity_events table" duration=765.396µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.992691551Z level=info msg="Executing migration" id="create dashboard public config v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.993510658Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=819.057µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.996302731Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:02.996747014Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.000931878Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.001355471Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.004188725Z level=info msg="Executing migration" 
id="Drop old dashboard public config table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.004949462Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=760.807µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.007802146Z level=info msg="Executing migration" id="recreate dashboard public config v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.008826604Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.024228ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.012962649Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.01430498Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.341401ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.017298545Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.018874959Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.575184ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.023500047Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.024510096Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.009399ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.029026073Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.030056002Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 
duration=1.023379ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.033163298Z level=info msg="Executing migration" id="Drop public config table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.034039445Z level=info msg="Migration successfully executed" id="Drop public config table" duration=874.517µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.038721875Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.040927433Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.207518ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.044529783Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.045664472Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.134139ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.049681857Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.050774845Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.092328ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.057229989Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.058551811Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.320822ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.062181921Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.087491543Z 
level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.310312ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.091927931Z level=info msg="Executing migration" id="add annotations_enabled column" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.098543446Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.614975ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.102577471Z level=info msg="Executing migration" id="add time_selection_enabled column" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.111083142Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.505571ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.114485341Z level=info msg="Executing migration" id="delete orphaned public dashboards" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.114787243Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=302.033µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.119026828Z level=info msg="Executing migration" id="add share column" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.127042635Z level=info msg="Migration successfully executed" id="add share column" duration=8.016277ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.130314533Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.130435325Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=120.972µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.133146817Z level=info msg="Executing migration" id="create file table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.133833893Z level=info msg="Migration successfully executed" id="create 
file table" duration=686.206µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.138171709Z level=info msg="Executing migration" id="file table idx: path natural pk" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.140176436Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.004207ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.149594215Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.151236259Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.625134ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.154398125Z level=info msg="Executing migration" id="create file_meta table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.155390873Z level=info msg="Migration successfully executed" id="create file_meta table" duration=993.098µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.185450696Z level=info msg="Executing migration" id="file table idx: path key" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.188239479Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.787893ms 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.191550797Z level=info msg="Executing migration" id="set path collation in file table" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.191656248Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=106.291µs 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.194124948Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.194190589Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=69.181µs 17:04:45 
grafana | logger=migrator t=2024-09-29T17:02:03.197024813Z level=info msg="Executing migration" id="managed permissions migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.197683829Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=658.416µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.201722643Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.201944324Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=218.721µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.204897849Z level=info msg="Executing migration" id="RBAC action name migrator"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.206272221Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.374622ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.209063085Z level=info msg="Executing migration" id="Add UID column to playlist"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.218319332Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.255837ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.221600809Z level=info msg="Executing migration" id="Update uid column values in playlist"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.221758891Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=157.282µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.225657164Z level=info msg="Executing migration" id="Add index for uid in playlist"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.226802273Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.144299ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.229901709Z level=info msg="Executing migration" id="update group index for alert rules"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.230340493Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=439.414µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.23356689Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.233776551Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=207.531µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.236933289Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.237418363Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=484.784µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.241461146Z level=info msg="Executing migration" id="add action column to seed_assignment"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.252878942Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.415366ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.256110769Z level=info msg="Executing migration" id="add scope column to seed_assignment"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.266047593Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.932824ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.269567732Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.270396318Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=825.586µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.274419193Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.352409488Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.984185ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.356438031Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.357535581Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.09811ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.361382293Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.362666923Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.28361ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.368382481Z level=info msg="Executing migration" id="add primary key to seed_assigment"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.39677965Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.397569ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.400620282Z level=info msg="Executing migration" id="add origin column to seed_assignment"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.40867147Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.049998ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.413037526Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.413340099Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=302.423µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.416549616Z level=info msg="Executing migration" id="prevent seeding OnCall access"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.416711217Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=161.991µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.420491579Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.42069878Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=207.041µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.424157299Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.424352531Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=195.172µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.428135803Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.428425216Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=289.423µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.431916485Z level=info msg="Executing migration" id="create folder table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.432911654Z level=info msg="Migration successfully executed" id="create folder table" duration=994.278µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.436380012Z level=info msg="Executing migration" id="Add index for parent_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.437610832Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.22787ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.441938459Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.443078728Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.139529ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.446751069Z level=info msg="Executing migration" id="Update folder title length"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.446778419Z level=info msg="Migration successfully executed" id="Update folder title length" duration=27.52µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.450317199Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.452316395Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.998446ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.457213627Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.459069743Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.856916ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.463801273Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.464821801Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.023808ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.468629963Z level=info msg="Executing migration" id="Sync dashboard and folder table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.469061676Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=430.813µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.472342285Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.472624277Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=268.662µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.47656115Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.477654568Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.093518ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.481039137Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.481923844Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=884.707µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.48498194Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.485837777Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=856.237µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.489925792Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.491166072Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.23992ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.49447662Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.49560084Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.12419ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.498837516Z level=info msg="Executing migration" id="create anon_device table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.499646233Z level=info msg="Migration successfully executed" id="create anon_device table" duration=808.227µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.504203972Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.505138879Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=933.907µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.508383406Z level=info msg="Executing migration" id="add index anon_device.updated_at"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.509285544Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=901.638µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.513226937Z level=info msg="Executing migration" id="create signing_key table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.513960343Z level=info msg="Migration successfully executed" id="create signing_key table" duration=733.106µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.518681813Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.519618311Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=936.047µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.522643197Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.523907787Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.27103ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.527702878Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.528065441Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=363.613µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.530704863Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.539907142Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.202389ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.543007478Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.543553572Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=546.604µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.546827059Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.546843099Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=16.61µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.550667712Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.552478227Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.810745ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.555627553Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.555654333Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=28.42µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.558505007Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.559802968Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.297011ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.563834362Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.565134823Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.299381ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.568433601Z level=info msg="Executing migration" id="create sso_setting table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.56958378Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.149479ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.572423534Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.573342052Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=924.948µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.577290975Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.577680578Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=390.413µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.580764794Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.581668982Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=902.768µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.585811536Z level=info msg="Executing migration" id="create cloud_migration table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.58731991Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.498113ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.590595287Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.591566785Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=973.348µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.595493818Z level=info msg="Executing migration" id="add stack_id column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.604790676Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.296858ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.636415581Z level=info msg="Executing migration" id="add region_slug column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.646189054Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.774443ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.650055716Z level=info msg="Executing migration" id="add cluster_slug column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.656699771Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=6.643765ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.660949017Z level=info msg="Executing migration" id="add migration uid column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.670514477Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.56545ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.673946107Z level=info msg="Executing migration" id="Update uid column values for migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.674257549Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=311.432µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.677521716Z level=info msg="Executing migration" id="Add unique index migration_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.678827737Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.305241ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.683233274Z level=info msg="Executing migration" id="add migration run uid column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.695738179Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=12.516205ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.699654043Z level=info msg="Executing migration" id="Update uid column values for migration run"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.699868994Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=214.951µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.703215432Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.704465442Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.24939ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.70897353Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.736137939Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=27.165009ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.739675148Z level=info msg="Executing migration" id="create cloud_migration_session v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.740439844Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=764.696µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.744109786Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.745386396Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.27641ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.749793053Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.750247057Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=453.714µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.753897097Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.754855345Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=957.968µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.758483376Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.782559448Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=24.069032ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.787171017Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.788185915Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=1.014438ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.791978697Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.793252998Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.273521ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.796676636Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.797087471Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=410.534µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.803212162Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.804774504Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.554722ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.809380193Z level=info msg="Executing migration" id="add snapshot upload_url column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.821023481Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=11.641158ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.824319428Z level=info msg="Executing migration" id="add snapshot status column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.831830192Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.509564ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.836093437Z level=info msg="Executing migration" id="add snapshot local_directory column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.845799849Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.706412ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.848846274Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.856516339Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=7.668965ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.860075979Z level=info msg="Executing migration" id="add snapshot encryption_key column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.869436477Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.359898ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.873774314Z level=info msg="Executing migration" id="add snapshot error_string column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.881926482Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=8.150168ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.88529284Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.886338679Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.045839ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.889776399Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.926708218Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=36.931819ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.931116985Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.931251416Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=67.13µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.934822886Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.944207475Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.383979ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.947541773Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.954635933Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.09347ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.95909121Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.959532043Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=441.823µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.962925312Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.963256645Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=331.333µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.966812545Z level=info msg="Executing migration" id="add record column to alert_rule table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.979445391Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.632846ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.982701068Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.992723982Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=10.022824ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:03.997009438Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:04.007132733Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=10.122795ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:04.010535594Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:04.018131158Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.593974ms
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:04.022407695Z level=info msg="Executing migration" id="Enable traceQL streaming for all Tempo datasources"
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:04.022430476Z level=info msg="Migration successfully executed" id="Enable traceQL streaming for all Tempo datasources" duration=23.851µs
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:04.028097145Z level=info msg="migrations completed" performed=594 skipped=0 duration=4.060204364s
17:04:45 grafana | logger=migrator t=2024-09-29T17:02:04.029460356Z level=info msg="Unlocking database"
17:04:45 grafana | logger=sqlstore t=2024-09-29T17:02:04.046861118Z level=info msg="Created default admin" user=admin
17:04:45 grafana | logger=sqlstore t=2024-09-29T17:02:04.04716015Z level=info msg="Created default organization"
17:04:45 grafana | logger=secrets t=2024-09-29T17:02:04.051404657Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
17:04:45 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-09-29T17:02:04.119401537Z level=info msg="Restored cache from database" duration=526.865µs
17:04:45 grafana | logger=plugin.store t=2024-09-29T17:02:04.12100222Z level=info msg="Loading plugins..."
17:04:45 grafana | logger=plugins.registration t=2024-09-29T17:02:04.163012734Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered"
17:04:45 grafana | logger=plugins.initialization t=2024-09-29T17:02:04.163047835Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered"
17:04:45 grafana | logger=local.finder t=2024-09-29T17:02:04.163117085Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
17:04:45 grafana | logger=plugin.store t=2024-09-29T17:02:04.163127555Z level=info msg="Plugins loaded" count=54 duration=42.125985ms
17:04:45 grafana | logger=query_data t=2024-09-29T17:02:04.16819463Z level=info msg="Query Service initialization"
17:04:45 grafana | logger=live.push_http t=2024-09-29T17:02:04.171283877Z level=info msg="Live Push Gateway initialization"
17:04:45 grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-09-29T17:02:04.175877457Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386
17:04:45 grafana | logger=ngalert.state.manager t=2024-09-29T17:02:04.182324412Z level=info msg="Running in alternative execution of Error/NoData mode"
17:04:45 grafana | logger=infra.usagestats.collector t=2024-09-29T17:02:04.18436604Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
17:04:45 grafana | logger=provisioning.datasources t=2024-09-29T17:02:04.186146625Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
17:04:45 grafana | logger=provisioning.alerting t=2024-09-29T17:02:04.204963699Z level=info msg="starting to provision alerting"
17:04:45 grafana | logger=provisioning.alerting t=2024-09-29T17:02:04.204992189Z level=info msg="finished to provision alerting"
17:04:45 grafana | logger=ngalert.state.manager t=2024-09-29T17:02:04.20511446Z level=info msg="Warming state cache for startup"
17:04:45 grafana | logger=grafanaStorageLogger t=2024-09-29T17:02:04.205420232Z level=info msg="Storage starting"
17:04:45 grafana | logger=ngalert.state.manager t=2024-09-29T17:02:04.205846386Z level=info msg="State cache has been initialized" states=0 duration=732.116µs
17:04:45 grafana | logger=ngalert.multiorg.alertmanager t=2024-09-29T17:02:04.205894586Z level=info msg="Starting MultiOrg Alertmanager"
17:04:45 grafana | logger=ngalert.scheduler t=2024-09-29T17:02:04.205918547Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
17:04:45 grafana | logger=ticker t=2024-09-29T17:02:04.206073679Z level=info msg=starting first_tick=2024-09-29T17:02:10Z
17:04:45 grafana | logger=http.server t=2024-09-29T17:02:04.207721753Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
17:04:45 grafana | logger=sqlstore.transactions t=2024-09-29T17:02:04.240965591Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
17:04:45 grafana | logger=plugins.update.checker t=2024-09-29T17:02:04.281196569Z level=info msg="Update check succeeded" duration=74.128392ms
17:04:45 grafana | logger=provisioning.dashboard t=2024-09-29T17:02:04.289185219Z level=info msg="starting to provision dashboards"
17:04:45 grafana | logger=grafana.update.checker t=2024-09-29T17:02:04.295243741Z level=info msg="Update check succeeded" duration=88.832019ms
17:04:45 grafana | logger=sqlstore.transactions t=2024-09-29T17:02:04.421042682Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
17:04:45 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-09-29T17:02:04.432395931Z level=info msg="Patterns update finished" duration=68.825136ms
17:04:45 grafana | logger=grafana-apiserver t=2024-09-29T17:02:04.604459234Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
17:04:45 grafana | logger=grafana-apiserver t=2024-09-29T17:02:04.604952498Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
17:04:45 grafana | logger=provisioning.dashboard t=2024-09-29T17:02:04.642919137Z level=info msg="finished to provision dashboards"
17:04:45 grafana | logger=infra.usagestats t=2024-09-29T17:03:30.219596338Z level=info msg="Usage stats are ready to report"
17:04:45 ===================================
17:04:45 ======== Logs from kafka ========
17:04:45 kafka | ===> User
17:04:45 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
17:04:45 kafka | ===> Configuring ...
17:04:45 kafka | Running in Zookeeper mode...
17:04:45 kafka | ===> Running preflight checks ...
17:04:45 kafka | ===> Check if /var/lib/kafka/data is writable ...
17:04:45 kafka | ===> Check if Zookeeper is healthy ...
17:04:45 kafka | [2024-09-29 17:02:06,258] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,258] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,258] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,258] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,258] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,258] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/kafka-raft-7.7.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/utility-belt-7.7.1-30.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/common-utils-7.7.1.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/kafka_2.13-7.7.1-ccs.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.7.1-ccs.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-7.7.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.7.1-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-b
ase-new/kafka-server-common-7.7.1-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-4.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.7.1-ccs.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.7.1-ccs.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.7.1-ccs.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.7.1.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar:/usr/share/java/cp-base-new/kafka-metadata-7.7.1-ccs.jar (org.apache.zookeeper.ZooKeeper) 17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,259] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,261] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,264] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
17:04:45 kafka | [2024-09-29 17:02:06,269] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
17:04:45 kafka | [2024-09-29 17:02:06,275] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:06,287] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:06,287] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:06,294] INFO Socket connection established, initiating session, client: /172.17.0.7:45712, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:06,339] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000291810000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:06,454] INFO Session: 0x100000291810000 closed (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:06,454] INFO EventThread shut down for session: 0x100000291810000 (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | Using log4j config /etc/kafka/log4j.properties
17:04:45 kafka | ===> Launching ...
17:04:45 kafka | ===> Launching kafka ...
17:04:45 kafka | [2024-09-29 17:02:07,060] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
17:04:45 kafka | [2024-09-29 17:02:07,289] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
17:04:45 kafka | [2024-09-29 17:02:07,357] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
17:04:45 kafka | [2024-09-29 17:02:07,359] INFO starting (kafka.server.KafkaServer)
17:04:45 kafka | [2024-09-29 17:02:07,359] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
17:04:45 kafka | [2024-09-29 17:02:07,371] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
17:04:45 kafka | [2024-09-29 17:02:07,375] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,375] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,375] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,375] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,375] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,375] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/connect-json-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/trogdor-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.1-ccs.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,376] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,377] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,377] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,377] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,378] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper)
17:04:45 kafka | [2024-09-29 17:02:07,382] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
17:04:45 kafka | [2024-09-29 17:02:07,387] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:07,388] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
17:04:45 kafka | [2024-09-29 17:02:07,392] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:07,397] INFO Socket connection established, initiating session, client: /172.17.0.7:45714, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:07,406] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000291810001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
17:04:45 kafka | [2024-09-29 17:02:07,410] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
17:04:45 kafka | [2024-09-29 17:02:07,753] INFO Cluster ID = o98Zehj3SPmhzfdRi49uhg (kafka.server.KafkaServer)
17:04:45 kafka | [2024-09-29 17:02:07,811] INFO KafkaConfig values:
17:04:45 kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
17:04:45 kafka | 	alter.config.policy.class.name = null
17:04:45 kafka | 	alter.log.dirs.replication.quota.window.num = 11
17:04:45 kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
17:04:45 kafka | 	authorizer.class.name =
17:04:45 kafka | 	auto.create.topics.enable = true
17:04:45 kafka | 	auto.include.jmx.reporter = true
17:04:45 kafka | 	auto.leader.rebalance.enable = true
17:04:45 kafka | 	background.threads = 10
17:04:45 kafka | 	broker.heartbeat.interval.ms = 2000
17:04:45 kafka | 	broker.id = 1
17:04:45 kafka | 	broker.id.generation.enable = true
17:04:45 kafka | 	broker.rack = null
17:04:45 kafka | 	broker.session.timeout.ms = 9000
17:04:45 kafka | 	client.quota.callback.class = null
17:04:45 kafka | 	compression.type = producer
17:04:45 kafka | 	connection.failed.authentication.delay.ms = 100
17:04:45 kafka | 	connections.max.idle.ms = 600000
17:04:45 kafka | 	connections.max.reauth.ms = 0
17:04:45 kafka | 	control.plane.listener.name = null
17:04:45 kafka | 	controlled.shutdown.enable = true
17:04:45 kafka | 	controlled.shutdown.max.retries = 3
17:04:45 kafka | 	controlled.shutdown.retry.backoff.ms = 5000
17:04:45 kafka | 	controller.listener.names = null
17:04:45 kafka | 	controller.quorum.append.linger.ms = 25
17:04:45 kafka | 	controller.quorum.election.backoff.max.ms = 1000
17:04:45 kafka | 	controller.quorum.election.timeout.ms = 1000
17:04:45 kafka | 	controller.quorum.fetch.timeout.ms = 2000
17:04:45 kafka | 	controller.quorum.request.timeout.ms = 2000
17:04:45 kafka | 	controller.quorum.retry.backoff.ms = 20
17:04:45 kafka | 	controller.quorum.voters = []
17:04:45 kafka | 	controller.quota.window.num = 11
17:04:45 kafka | 	controller.quota.window.size.seconds = 1
17:04:45 kafka | 	controller.socket.timeout.ms = 30000
17:04:45 kafka | 	create.topic.policy.class.name = null
17:04:45 kafka | 	default.replication.factor = 1
17:04:45 kafka | 	delegation.token.expiry.check.interval.ms = 3600000
17:04:45 kafka | 	delegation.token.expiry.time.ms = 86400000
17:04:45 kafka | 	delegation.token.master.key = null
17:04:45 kafka | 	delegation.token.max.lifetime.ms = 604800000
17:04:45 kafka | 	delegation.token.secret.key = null
17:04:45 kafka | 	delete.records.purgatory.purge.interval.requests = 1
17:04:45 kafka | 	delete.topic.enable = true
17:04:45 kafka | 	early.start.listeners = null
17:04:45 kafka | 	eligible.leader.replicas.enable = false
17:04:45 kafka | 	fetch.max.bytes = 57671680
17:04:45 kafka | 	fetch.purgatory.purge.interval.requests = 1000
17:04:45 kafka | 	group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
17:04:45 kafka | 	group.consumer.heartbeat.interval.ms = 5000
17:04:45 kafka | 	group.consumer.max.heartbeat.interval.ms = 15000
17:04:45 kafka | 	group.consumer.max.session.timeout.ms = 60000
17:04:45 kafka | 	group.consumer.max.size = 2147483647
17:04:45 kafka | 	group.consumer.min.heartbeat.interval.ms = 5000
17:04:45 kafka | 	group.consumer.min.session.timeout.ms = 45000
17:04:45 kafka | 	group.consumer.session.timeout.ms = 45000
17:04:45 kafka | 	group.coordinator.new.enable = false
17:04:45 kafka | 	group.coordinator.rebalance.protocols = [classic]
17:04:45 kafka | 	group.coordinator.threads = 1
17:04:45 kafka | 	group.initial.rebalance.delay.ms = 3000
17:04:45 kafka | 	group.max.session.timeout.ms = 1800000
17:04:45 kafka | 	group.max.size = 2147483647
17:04:45 kafka | 	group.min.session.timeout.ms = 6000
17:04:45 kafka | 	initial.broker.registration.timeout.ms = 60000
17:04:45 kafka | 	inter.broker.listener.name = PLAINTEXT
17:04:45 kafka | 	inter.broker.protocol.version = 3.7-IV4
17:04:45 kafka | 	kafka.metrics.polling.interval.secs = 10
17:04:45 kafka | 	kafka.metrics.reporters = []
17:04:45 kafka | 	leader.imbalance.check.interval.seconds = 300
17:04:45 kafka | 	leader.imbalance.per.broker.percentage = 10
17:04:45 kafka | 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
17:04:45 kafka | 	listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
17:04:45 kafka | 	log.cleaner.backoff.ms = 15000
17:04:45 kafka | 	log.cleaner.dedupe.buffer.size = 134217728
17:04:45 kafka | 	log.cleaner.delete.retention.ms = 86400000
17:04:45 kafka | 	log.cleaner.enable = true
17:04:45 kafka | 	log.cleaner.io.buffer.load.factor = 0.9
17:04:45 kafka | 	log.cleaner.io.buffer.size = 524288
17:04:45 kafka | 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
17:04:45 kafka | 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
17:04:45 kafka | 	log.cleaner.min.cleanable.ratio = 0.5
17:04:45 kafka | 	log.cleaner.min.compaction.lag.ms = 0
17:04:45 kafka | 	log.cleaner.threads = 1
17:04:45 kafka | 	log.cleanup.policy = [delete]
17:04:45 kafka | 	log.dir = /tmp/kafka-logs
17:04:45 kafka | 	log.dirs = /var/lib/kafka/data
17:04:45 kafka | 	log.flush.interval.messages = 9223372036854775807
17:04:45 kafka | 	log.flush.interval.ms = null
17:04:45 kafka | 	log.flush.offset.checkpoint.interval.ms = 60000
17:04:45 kafka | 	log.flush.scheduler.interval.ms = 9223372036854775807
17:04:45 kafka | 	log.flush.start.offset.checkpoint.interval.ms = 60000
17:04:45 kafka | 	log.index.interval.bytes = 4096
17:04:45 kafka | 	log.index.size.max.bytes = 10485760
17:04:45 kafka | 	log.local.retention.bytes = -2
17:04:45 kafka | 	log.local.retention.ms = -2
17:04:45 kafka | 	log.message.downconversion.enable = true
17:04:45 kafka | 	log.message.format.version = 3.0-IV1
17:04:45 kafka | 	log.message.timestamp.after.max.ms = 9223372036854775807
17:04:45 kafka | 	log.message.timestamp.before.max.ms = 9223372036854775807
17:04:45 kafka | 	log.message.timestamp.difference.max.ms = 9223372036854775807
17:04:45 kafka | 	log.message.timestamp.type = CreateTime
17:04:45 kafka | 	log.preallocate = false
17:04:45 kafka | 	log.retention.bytes = -1
17:04:45 kafka | 	log.retention.check.interval.ms = 300000
17:04:45 kafka | 	log.retention.hours = 168
17:04:45 kafka | 	log.retention.minutes = null
17:04:45 kafka | 	log.retention.ms = null
17:04:45 kafka | 	log.roll.hours = 168
17:04:45 kafka | 	log.roll.jitter.hours = 0
17:04:45 kafka | 	log.roll.jitter.ms = null
17:04:45 kafka | 	log.roll.ms = null
17:04:45 kafka | 	log.segment.bytes = 1073741824
17:04:45 kafka | 	log.segment.delete.delay.ms = 60000
17:04:45 kafka | 	max.connection.creation.rate = 2147483647
17:04:45 kafka | 	max.connections = 2147483647
17:04:45 kafka | 	max.connections.per.ip = 2147483647
17:04:45 kafka | 	max.connections.per.ip.overrides =
17:04:45 kafka | 	max.incremental.fetch.session.cache.slots = 1000
17:04:45 kafka | 	message.max.bytes = 1048588
17:04:45 kafka | 	metadata.log.dir = null
17:04:45 kafka | 	metadata.log.max.record.bytes.between.snapshots = 20971520
17:04:45 kafka | 	metadata.log.max.snapshot.interval.ms = 3600000
17:04:45 kafka | 	metadata.log.segment.bytes = 1073741824
17:04:45 kafka | 	metadata.log.segment.min.bytes = 8388608
17:04:45 kafka | 	metadata.log.segment.ms = 604800000
17:04:45 kafka | 	metadata.max.idle.interval.ms = 500
17:04:45 kafka | 	metadata.max.retention.bytes = 104857600
17:04:45 kafka | 	metadata.max.retention.ms = 604800000
17:04:45 kafka | 	metric.reporters = []
17:04:45 kafka | 	metrics.num.samples = 2
17:04:45 kafka | 	metrics.recording.level = INFO
17:04:45 kafka | 	metrics.sample.window.ms = 30000
17:04:45 kafka | 	min.insync.replicas = 1
17:04:45 kafka | 	node.id = 1
17:04:45 kafka | 	num.io.threads = 8
17:04:45 kafka | 	num.network.threads = 3
17:04:45 kafka | 	num.partitions = 1
17:04:45 kafka | 	num.recovery.threads.per.data.dir = 1
17:04:45 kafka | 	num.replica.alter.log.dirs.threads = null
17:04:45 kafka | 	num.replica.fetchers = 1
17:04:45 kafka | 	offset.metadata.max.bytes = 4096
17:04:45 kafka | 	offsets.commit.required.acks = -1
17:04:45 kafka | 	offsets.commit.timeout.ms = 5000
17:04:45 kafka | 	offsets.load.buffer.size = 5242880
17:04:45 kafka | 	offsets.retention.check.interval.ms = 600000
17:04:45 kafka | 	offsets.retention.minutes = 10080
17:04:45 kafka | 	offsets.topic.compression.codec = 0
17:04:45 kafka | 	offsets.topic.num.partitions = 50
17:04:45 kafka | 	offsets.topic.replication.factor = 1
17:04:45 kafka | 	offsets.topic.segment.bytes = 104857600
17:04:45 kafka | 	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
17:04:45 kafka | 	password.encoder.iterations = 4096
17:04:45 kafka | 	password.encoder.key.length = 128
17:04:45 kafka | 	password.encoder.keyfactory.algorithm = null
17:04:45 kafka | 	password.encoder.old.secret = null
17:04:45 kafka | 	password.encoder.secret = null
17:04:45 kafka | 	principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
17:04:45 kafka | 	process.roles = []
17:04:45 kafka | 	producer.id.expiration.check.interval.ms = 600000
17:04:45 kafka | 	producer.id.expiration.ms = 86400000
17:04:45 kafka | 	producer.purgatory.purge.interval.requests = 1000
17:04:45 kafka | 	queued.max.request.bytes = -1
17:04:45 kafka | 	queued.max.requests = 500
17:04:45 kafka | 	quota.window.num = 11
17:04:45 kafka | 	quota.window.size.seconds = 1
17:04:45 kafka | 	remote.log.index.file.cache.total.size.bytes = 1073741824
17:04:45 kafka | 	remote.log.manager.task.interval.ms = 30000
17:04:45 kafka | 	remote.log.manager.task.retry.backoff.max.ms = 30000
17:04:45 kafka | 	remote.log.manager.task.retry.backoff.ms = 500
17:04:45 kafka | 	remote.log.manager.task.retry.jitter = 0.2
17:04:45 kafka | 	remote.log.manager.thread.pool.size = 10
17:04:45 kafka | 	remote.log.metadata.custom.metadata.max.bytes = 128
17:04:45 kafka | 	remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
17:04:45 kafka | 	remote.log.metadata.manager.class.path = null
17:04:45 kafka | 	remote.log.metadata.manager.impl.prefix = rlmm.config.
17:04:45 kafka | 	remote.log.metadata.manager.listener.name = null
17:04:45 kafka | 	remote.log.reader.max.pending.tasks = 100
17:04:45 kafka | 	remote.log.reader.threads = 10
17:04:45 kafka | 	remote.log.storage.manager.class.name = null
17:04:45 kafka | 	remote.log.storage.manager.class.path = null
17:04:45 kafka | 	remote.log.storage.manager.impl.prefix = rsm.config.
17:04:45 kafka | 	remote.log.storage.system.enable = false
17:04:45 kafka | 	replica.fetch.backoff.ms = 1000
17:04:45 kafka | 	replica.fetch.max.bytes = 1048576
17:04:45 kafka | 	replica.fetch.min.bytes = 1
17:04:45 kafka | 	replica.fetch.response.max.bytes = 10485760
17:04:45 kafka | 	replica.fetch.wait.max.ms = 500
17:04:45 kafka | 	replica.high.watermark.checkpoint.interval.ms = 5000
17:04:45 kafka | 	replica.lag.time.max.ms = 30000
17:04:45 kafka | 	replica.selector.class = null
17:04:45 kafka | 	replica.socket.receive.buffer.bytes = 65536
17:04:45 kafka | 	replica.socket.timeout.ms = 30000
17:04:45 kafka | 	replication.quota.window.num = 11
17:04:45 kafka | 	replication.quota.window.size.seconds = 1
17:04:45 kafka | 	request.timeout.ms = 30000
17:04:45 kafka | 	reserved.broker.max.id = 1000
17:04:45 kafka | 	sasl.client.callback.handler.class = null
17:04:45 kafka | 	sasl.enabled.mechanisms = [GSSAPI]
17:04:45 kafka | 	sasl.jaas.config = null
17:04:45 kafka | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:04:45 kafka | 	sasl.kerberos.min.time.before.relogin = 60000
17:04:45 kafka | 	sasl.kerberos.principal.to.local.rules = [DEFAULT]
17:04:45 kafka | 	sasl.kerberos.service.name = null
17:04:45 kafka | 	sasl.kerberos.ticket.renew.jitter = 0.05
17:04:45 kafka | 	sasl.kerberos.ticket.renew.window.factor = 0.8
17:04:45 kafka | 	sasl.login.callback.handler.class = null
17:04:45 kafka | 	sasl.login.class = null
17:04:45 kafka | 	sasl.login.connect.timeout.ms = null
17:04:45 kafka | 	sasl.login.read.timeout.ms = null
17:04:45 kafka | 	sasl.login.refresh.buffer.seconds = 300
17:04:45 kafka | 	sasl.login.refresh.min.period.seconds = 60
17:04:45 kafka | 	sasl.login.refresh.window.factor = 0.8
17:04:45 kafka | 	sasl.login.refresh.window.jitter = 0.05
17:04:45 kafka | 	sasl.login.retry.backoff.max.ms = 10000
17:04:45 kafka | 	sasl.login.retry.backoff.ms = 100
17:04:45 kafka | 	sasl.mechanism.controller.protocol = GSSAPI
17:04:45 kafka | 	sasl.mechanism.inter.broker.protocol = GSSAPI
17:04:45 kafka | 	sasl.oauthbearer.clock.skew.seconds = 30
17:04:45 kafka | 	sasl.oauthbearer.expected.audience = null
17:04:45 kafka | 	sasl.oauthbearer.expected.issuer = null
17:04:45 kafka | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:04:45 kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:04:45 kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:04:45 kafka | 	sasl.oauthbearer.jwks.endpoint.url = null
17:04:45 kafka | 	sasl.oauthbearer.scope.claim.name = scope
17:04:45 kafka | 	sasl.oauthbearer.sub.claim.name = sub
17:04:45 kafka | 	sasl.oauthbearer.token.endpoint.url = null
17:04:45 kafka | 	sasl.server.callback.handler.class = null
17:04:45 kafka | 	sasl.server.max.receive.size = 524288
17:04:45 kafka | 	security.inter.broker.protocol = PLAINTEXT
17:04:45 kafka | 	security.providers = null
17:04:45 kafka | 	server.max.startup.time.ms = 9223372036854775807
17:04:45 kafka | 	socket.connection.setup.timeout.max.ms = 30000
17:04:45 kafka | 	socket.connection.setup.timeout.ms = 10000
17:04:45 kafka | 	socket.listen.backlog.size = 50
17:04:45 kafka | 	socket.receive.buffer.bytes = 102400
17:04:45 kafka | 	socket.request.max.bytes = 104857600
17:04:45 kafka | 	socket.send.buffer.bytes = 102400
17:04:45 kafka | 	ssl.allow.dn.changes = false
17:04:45 kafka | 	ssl.allow.san.changes = false
17:04:45 kafka | 	ssl.cipher.suites = []
17:04:45 kafka | 	ssl.client.auth = none
17:04:45 kafka | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:04:45 kafka | 	ssl.endpoint.identification.algorithm = https
17:04:45 kafka | 	ssl.engine.factory.class = null
17:04:45 kafka | 	ssl.key.password = null
17:04:45 kafka | 	ssl.keymanager.algorithm = SunX509
17:04:45 kafka | 	ssl.keystore.certificate.chain = null
17:04:45 kafka | 	ssl.keystore.key = null
17:04:45 kafka | 	ssl.keystore.location = null
17:04:45 kafka | 	ssl.keystore.password = null
17:04:45 kafka | 	ssl.keystore.type = JKS
17:04:45 kafka | 	ssl.principal.mapping.rules = DEFAULT
17:04:45 kafka | 	ssl.protocol = TLSv1.3
17:04:45 kafka | 	ssl.provider = null
17:04:45 kafka | 	ssl.secure.random.implementation = null
17:04:45 kafka | 	ssl.trustmanager.algorithm = PKIX
17:04:45 kafka | 	ssl.truststore.certificates = null
17:04:45 kafka | 	ssl.truststore.location = null
17:04:45 kafka | 	ssl.truststore.password = null
17:04:45 kafka | 	ssl.truststore.type = JKS
17:04:45 kafka | 	telemetry.max.bytes = 1048576
17:04:45 kafka | 	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
17:04:45 kafka | 	transaction.max.timeout.ms = 900000
17:04:45 kafka | 	transaction.partition.verification.enable = true
17:04:45 kafka | 	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
17:04:45 kafka | 	transaction.state.log.load.buffer.size = 5242880
17:04:45 kafka | 	transaction.state.log.min.isr = 2
17:04:45 kafka | 	transaction.state.log.num.partitions = 50
17:04:45 kafka | 	transaction.state.log.replication.factor = 3
17:04:45 kafka | 	transaction.state.log.segment.bytes = 104857600
17:04:45 kafka | 	transactional.id.expiration.ms = 604800000
17:04:45 kafka | 	unclean.leader.election.enable = false
17:04:45 kafka | 	unstable.api.versions.enable = false
17:04:45 kafka | 	unstable.metadata.versions.enable = false
17:04:45 kafka | 	zookeeper.clientCnxnSocket = null
17:04:45 kafka | 	zookeeper.connect = zookeeper:2181
17:04:45 kafka | 	zookeeper.connection.timeout.ms = null
17:04:45 kafka | 	zookeeper.max.in.flight.requests = 10
17:04:45 kafka | 	zookeeper.metadata.migration.enable = false
17:04:45 kafka | 	zookeeper.metadata.migration.min.batch.size = 200
17:04:45 kafka | 	zookeeper.session.timeout.ms = 18000
17:04:45 kafka | 	zookeeper.set.acl = false
17:04:45 kafka | 	zookeeper.ssl.cipher.suites = null
17:04:45 kafka | 	zookeeper.ssl.client.enable = false
17:04:45 kafka | 	zookeeper.ssl.crl.enable = false
17:04:45 kafka | 	zookeeper.ssl.enabled.protocols = null
17:04:45 kafka | 	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
17:04:45 kafka | 	zookeeper.ssl.keystore.location = null
17:04:45 kafka | 	zookeeper.ssl.keystore.password = null
17:04:45 kafka | 	zookeeper.ssl.keystore.type = null
17:04:45 kafka | 	zookeeper.ssl.ocsp.enable = false
17:04:45 kafka | 	zookeeper.ssl.protocol = TLSv1.2
17:04:45 kafka | 	zookeeper.ssl.truststore.location = null
17:04:45 kafka | 	zookeeper.ssl.truststore.password = null
17:04:45 kafka | 	zookeeper.ssl.truststore.type = null
17:04:45 kafka | 	(kafka.server.KafkaConfig)
17:04:45 kafka | [2024-09-29 17:02:07,846] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:04:45 kafka | [2024-09-29 17:02:07,847] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:04:45 kafka | [2024-09-29 17:02:07,848] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:04:45 kafka | [2024-09-29 17:02:07,850] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:04:45 kafka | [2024-09-29 17:02:07,856] INFO [KafkaServer id=1] Rewriting /var/lib/kafka/data/meta.properties (kafka.server.KafkaServer)
17:04:45 kafka | [2024-09-29 17:02:07,929] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:07,936] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:07,946] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:07,948] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:07,949] INFO Starting log flusher with a default period of 9223372036854775807 ms.
(kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:07,967] INFO Starting the log cleaner (kafka.log.LogCleaner) 17:04:45 kafka | [2024-09-29 17:02:08,025] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 17:04:45 kafka | [2024-09-29 17:02:08,041] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 17:04:45 kafka | [2024-09-29 17:02:08,055] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 17:04:45 kafka | [2024-09-29 17:02:08,078] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) 17:04:45 kafka | [2024-09-29 17:02:08,355] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 17:04:45 kafka | [2024-09-29 17:02:08,372] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 17:04:45 kafka | [2024-09-29 17:02:08,372] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 17:04:45 kafka | [2024-09-29 17:02:08,376] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 17:04:45 kafka | [2024-09-29 17:02:08,379] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) 17:04:45 kafka | [2024-09-29 17:02:08,402] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,403] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,406] INFO [ExpirationReaper-1-DeleteRecords]: 
Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,408] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,408] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,422] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 17:04:45 kafka | [2024-09-29 17:02:08,424] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 17:04:45 kafka | [2024-09-29 17:02:08,459] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 17:04:45 kafka | [2024-09-29 17:02:08,493] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1727629328471,1727629328471,1,0,0,72057605069012993,258,0,27 17:04:45 kafka | (kafka.zk.KafkaZkClient) 17:04:45 kafka | [2024-09-29 17:02:08,494] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 17:04:45 kafka | [2024-09-29 17:02:08,540] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 17:04:45 kafka | [2024-09-29 17:02:08,546] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,553] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,553] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,563] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 17:04:45 kafka | [2024-09-29 17:02:08,571] INFO 
[GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:08,571] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,575] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,577] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:08,582] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 17:04:45 kafka | [2024-09-29 17:02:08,598] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 17:04:45 kafka | [2024-09-29 17:02:08,603] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 17:04:45 kafka | [2024-09-29 17:02:08,603] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 17:04:45 kafka | [2024-09-29 17:02:08,603] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.7-IV4, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 17:04:45 kafka | [2024-09-29 17:02:08,603] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,608] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,639] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,641] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,657] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,661] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,664] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:04:45 kafka | [2024-09-29 17:02:08,667] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 17:04:45 kafka | [2024-09-29 17:02:08,674] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 17:04:45 kafka | [2024-09-29 17:02:08,674] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,674] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,675] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,675] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,678] INFO [Controller id=1] List of topics to be 
deleted: (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,678] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,678] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,678] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 17:04:45 kafka | [2024-09-29 17:02:08,680] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,683] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:08,689] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 17:04:45 kafka | [2024-09-29 17:02:08,690] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 17:04:45 kafka | [2024-09-29 17:02:08,692] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 17:04:45 kafka | [2024-09-29 17:02:08,692] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 17:04:45 kafka | [2024-09-29 17:02:08,692] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 17:04:45 kafka | [2024-09-29 17:02:08,693] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 17:04:45 kafka | [2024-09-29 17:02:08,695] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 
(kafka.controller.ZkPartitionStateMachine) 17:04:45 kafka | [2024-09-29 17:02:08,695] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,696] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 17:04:45 kafka | [2024-09-29 17:02:08,702] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,702] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,702] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,702] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,703] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,709] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 17:04:45 kafka | [2024-09-29 17:02:08,711] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.7:9092) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) 17:04:45 kafka | [2024-09-29 17:02:08,713] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 17:04:45 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
17:04:45 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71) 17:04:45 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 17:04:45 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 17:04:45 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:135) 17:04:45 kafka | [2024-09-29 17:02:08,714] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 17:04:45 kafka | [2024-09-29 17:02:08,714] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:08,717] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 17:04:45 kafka | [2024-09-29 17:02:08,720] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 17:04:45 kafka | [2024-09-29 17:02:08,726] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) 17:04:45 kafka | [2024-09-29 17:02:08,731] INFO Kafka version: 7.7.1-ccs (org.apache.kafka.common.utils.AppInfoParser) 17:04:45 kafka | [2024-09-29 17:02:08,731] INFO Kafka commitId: 91d86f33092378c89731b4a9cf1ce5db831a2b07 (org.apache.kafka.common.utils.AppInfoParser) 17:04:45 kafka | [2024-09-29 17:02:08,731] INFO Kafka startTimeMs: 1727629328728 (org.apache.kafka.common.utils.AppInfoParser) 17:04:45 kafka | [2024-09-29 17:02:08,733] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 17:04:45 kafka | [2024-09-29 17:02:08,818] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 17:04:45 kafka | [2024-09-29 17:02:08,884] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) 17:04:45 kafka | [2024-09-29 17:02:08,891] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) 17:04:45 kafka | [2024-09-29 17:02:08,901] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:13,717] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:13,717] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:42,431] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:42,438] INFO 
[Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:42,439] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:04:45 kafka | [2024-09-29 17:02:42,440] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:04:45 kafka | [2024-09-29 17:02:42,474] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(KKfmCROVTHCIWyPhAwsJIQ),Map(policy-pdp-pap-0 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(8Jq_d7s3Q_69HCZgy0LCGg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:42,475] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40
 (kafka.controller.KafkaController) 17:04:45 kafka | [2024-09-29 17:02:42,477] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,477] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,477] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,478] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,479] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,480] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,485] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
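[editor's note] The records so far trace two of the Kafka controller's state machines: each partition moves NonExistentPartition -> NewPartition -> OnlinePartition, and each replica moves NonExistentReplica -> NewReplica. A simplified model of just the transitions visible in this log (the dict encoding and function name are mine, not Kafka's code):

```python
# Transitions observed in the state.change.logger output above.
# Simplified model of the controller's state machines, not Kafka's
# actual implementation, which has more states (e.g. OfflinePartition).
PARTITION_TRANSITIONS = {
    "NonExistentPartition": {"NewPartition"},
    "NewPartition": {"OnlinePartition"},
}
REPLICA_TRANSITIONS = {
    "NonExistentReplica": {"NewReplica"},
}

def is_legal(transitions: dict, old: str, new: str) -> bool:
    """Check whether old -> new is one of the modeled transitions."""
    return new in transitions.get(old, set())
```

Running each parsed (old, new) pair through `is_legal` is a cheap sanity check that a captured log shows only expected transitions.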
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,486] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,487] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,487] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,646] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,646] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,646] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0,
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,652] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,653] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,663] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,664] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,672] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,674] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,674] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,674] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from 
controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,674] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,674] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,674] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,674] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,675] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,676] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,677] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,677] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,677] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,677] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,677] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,677] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,678] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,678] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,678] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,678] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,678] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,679] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,731] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,732] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,733] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,733] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,733] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,738] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,738] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,739] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,739] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,739] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,739] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,739] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,740] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,740] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,740] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,740] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,740] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,741] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
17:04:45 kafka | [2024-09-29 17:02:42,741] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,813] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:42,824] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:42,828] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,830] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,832] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,853] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:42,853] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:42,858] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,858] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,858] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,868] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:42,869] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:42,870] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,870] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,870] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,876] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:42,877] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:42,877] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,877] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,877] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,884] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:42,885] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:42,885] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,885] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,885] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:42,903] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:42,904] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:42,904] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,904] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:42,905] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,916] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:42,917] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:42,917] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,917] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,917] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,939] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:42,939] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:42,939] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,939] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,939] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,946] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:42,947] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:42,947] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,947] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,947] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,965] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:42,968] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:42,968] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,968] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,968] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,980] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:42,980] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:42,981] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,981] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,981] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,988] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:42,988] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:42,988] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,988] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,988] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:42,996] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:42,996] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:42,997] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,997] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:42,997] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,008] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,009] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,009] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,009] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,010] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,017] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,018] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,018] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,018] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,020] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,044] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,044] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,044] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,045] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,045] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,055] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,056] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,056] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,056] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,056] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,066] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,066] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,066] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,066] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,068] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,076] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,076] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,076] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,076] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,076] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,096] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,102] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,102] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,102] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,103] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,119] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,120] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,120] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,120] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,120] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,156] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,157] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,157] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,157] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,157] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,164] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,164] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,164] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,164] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,164] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,170] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,170] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,170] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,170] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,170] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,175] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,176] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,176] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,176] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,176] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,183] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,183] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,183] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,183] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,183] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,193] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,194] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,194] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,194] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,194] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,205] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,205] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,206] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,206] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,206] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:04:45 kafka | [2024-09-29 17:02:43,212] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:04:45 kafka | [2024-09-29 17:02:43,221] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:04:45 kafka | [2024-09-29 17:02:43,222] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,222] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 17:04:45 kafka | [2024-09-29 17:02:43,222] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,228] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,228] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,228] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,228] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,228] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,237] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,237] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,237] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,237] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,237] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,254] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,255] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,255] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,255] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,255] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,265] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,266] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,267] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,267] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,267] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,274] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,274] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,275] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,275] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,275] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,283] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,283] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,283] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,283] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,283] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(KKfmCROVTHCIWyPhAwsJIQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,290] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,291] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,291] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,291] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,291] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,299] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,299] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,299] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,299] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,300] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,307] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,308] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,308] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,308] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,308] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,314] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,314] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,315] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,315] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,315] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,322] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,322] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,322] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,322] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,323] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,329] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,330] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,330] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,330] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,331] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,345] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,345] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,346] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,346] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,346] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,353] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,355] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,356] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,356] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,356] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,364] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,365] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,365] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,365] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,365] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,373] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,374] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,374] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,374] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,374] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,384] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,385] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,385] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,385] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,385] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,393] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,394] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,394] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,394] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,394] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,402] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,402] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,402] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,402] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,402] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,409] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,410] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,410] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,410] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,410] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,417] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,417] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,418] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,418] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,418] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,425] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:04:45 kafka | [2024-09-29 17:02:43,426] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:04:45 kafka | [2024-09-29 17:02:43,426] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,426] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
17:04:45 kafka | [2024-09-29 17:02:43,426] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(8Jq_d7s3Q_69HCZgy0LCGg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:04:45 kafka | [2024-09-29 17:02:43,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,446] INFO [Broker id=1] Finished LeaderAndIsr request in 777ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,452] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=8Jq_d7s3Q_69HCZgy0LCGg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=KKfmCROVTHCIWyPhAwsJIQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:45 kafka | [2024-09-29 17:02:43,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:45 kafka | [2024-09-29 17:02:43,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:45 kafka | [2024-09-29 17:02:43,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:04:45 kafka | [2024-09-29 17:02:43,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,461] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,462] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,463] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,465] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
17:04:45 kafka | [2024-09-29 17:02:43,534] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-d6c043eb-ab29-40d1-9783-0bdadc82bc9d and request the member to rejoin with this id.
(kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:43,560] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-d6c043eb-ab29-40d1-9783-0bdadc82bc9d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:43,570] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 2cd2bf8c-cde2-4801-93ac-009d1b720a1d in Empty state. Created a new member id consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3-3ceef4f0-12a3-4015-a16f-5a374933d47e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:43,573] INFO [GroupCoordinator 1]: Preparing to rebalance group 2cd2bf8c-cde2-4801-93ac-009d1b720a1d in state PreparingRebalance with old generation 0 (__consumer_offsets-48) (reason: Adding new member consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3-3ceef4f0-12a3-4015-a16f-5a374933d47e with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:44,380] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 58d69853-23cf-4753-9963-1fc883efa8c8 in Empty state. Created a new member id consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2-5a134043-d934-4f04-a48e-7072330d0e62 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:44,383] INFO [GroupCoordinator 1]: Preparing to rebalance group 58d69853-23cf-4753-9963-1fc883efa8c8 in state PreparingRebalance with old generation 0 (__consumer_offsets-40) (reason: Adding new member consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2-5a134043-d934-4f04-a48e-7072330d0e62 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:46,569] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:46,573] INFO [GroupCoordinator 1]: Stabilized group 2cd2bf8c-cde2-4801-93ac-009d1b720a1d generation 1 (__consumer_offsets-48) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:46,592] INFO [GroupCoordinator 1]: Assignment received from leader consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3-3ceef4f0-12a3-4015-a16f-5a374933d47e for group 2cd2bf8c-cde2-4801-93ac-009d1b720a1d for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:46,592] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-d6c043eb-ab29-40d1-9783-0bdadc82bc9d for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:47,384] INFO [GroupCoordinator 1]: Stabilized group 58d69853-23cf-4753-9963-1fc883efa8c8 generation 1 (__consumer_offsets-40) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:04:45 kafka | [2024-09-29 17:02:47,400] INFO [GroupCoordinator 1]: Assignment received from leader consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2-5a134043-d934-4f04-a48e-7072330d0e62 for group 58d69853-23cf-4753-9963-1fc883efa8c8 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:04:45 =================================== 17:04:45 ======== Logs from mariadb ======== 17:04:45 mariadb | 2024-09-29 17:02:04+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 17:04:45 mariadb | 2024-09-29 17:02:04+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 17:04:45 mariadb | 2024-09-29 17:02:04+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 17:04:45 mariadb | 2024-09-29 17:02:04+00:00 [Note] [Entrypoint]: Initializing database files 17:04:45 mariadb | 2024-09-29 17:02:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:04:45 mariadb | 2024-09-29 17:02:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:04:45 mariadb | 2024-09-29 17:02:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:04:45 mariadb | 17:04:45 mariadb | 17:04:45 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
17:04:45 mariadb | To do so, start the server, then issue the following command:
17:04:45 mariadb |
17:04:45 mariadb | '/usr/bin/mysql_secure_installation'
17:04:45 mariadb |
17:04:45 mariadb | which will also give you the option of removing the test
17:04:45 mariadb | databases and anonymous user created by default. This is
17:04:45 mariadb | strongly recommended for production servers.
17:04:45 mariadb |
17:04:45 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
17:04:45 mariadb |
17:04:45 mariadb | Please report any problems at https://mariadb.org/jira
17:04:45 mariadb |
17:04:45 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
17:04:45 mariadb |
17:04:45 mariadb | Consider joining MariaDB's strong and vibrant community:
17:04:45 mariadb | https://mariadb.org/get-involved/
17:04:45 mariadb |
17:04:45 mariadb | 2024-09-29 17:02:06+00:00 [Note] [Entrypoint]: Database files initialized
17:04:45 mariadb | 2024-09-29 17:02:06+00:00 [Note] [Entrypoint]: Starting temporary server
17:04:45 mariadb | 2024-09-29 17:02:06+00:00 [Note] [Entrypoint]: Waiting for server startup
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 97 ...
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: Number of transaction pools: 1
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: Completed initialization of buffer pool
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: 128 rollback segments are active.
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] InnoDB: log sequence number 46606; transaction id 14
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] Plugin 'FEEDBACK' is disabled.
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
17:04:45 mariadb | 2024-09-29 17:02:06 0 [Note] mariadbd: ready for connections.
17:04:45 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
17:04:45 mariadb | 2024-09-29 17:02:07+00:00 [Note] [Entrypoint]: Temporary server started.
17:04:45 mariadb | 2024-09-29 17:02:08+00:00 [Note] [Entrypoint]: Creating user policy_user
17:04:45 mariadb | 2024-09-29 17:02:08+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
17:04:45 mariadb |
17:04:45 mariadb | 2024-09-29 17:02:08+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
17:04:45 mariadb |
17:04:45 mariadb | 2024-09-29 17:02:08+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
17:04:45 mariadb | #!/bin/bash -xv
17:04:45 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
17:04:45 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
17:04:45 mariadb | #
17:04:45 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
17:04:45 mariadb | # you may not use this file except in compliance with the License.
17:04:45 mariadb | # You may obtain a copy of the License at
17:04:45 mariadb | #
17:04:45 mariadb | # http://www.apache.org/licenses/LICENSE-2.0
17:04:45 mariadb | #
17:04:45 mariadb | # Unless required by applicable law or agreed to in writing, software
17:04:45 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
17:04:45 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17:04:45 mariadb | # See the License for the specific language governing permissions and
17:04:45 mariadb | # limitations under the License.
17:04:45 mariadb |
17:04:45 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
17:04:45 mariadb | do
17:04:45 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
17:04:45 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
17:04:45 mariadb | done
17:04:45 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
17:04:45 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
17:04:45 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
17:04:45 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
17:04:45 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
17:04:45 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
17:04:45 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
17:04:45 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
17:04:45 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
17:04:45 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
17:04:45 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
17:04:45 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
17:04:45 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
17:04:45 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
17:04:45 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
17:04:45 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
17:04:45 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
17:04:45 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
17:04:45 mariadb |
17:04:45 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
17:04:45 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
17:04:45 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
17:04:45 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
17:04:45 mariadb |
17:04:45 mariadb | 2024-09-29 17:02:09+00:00 [Note] [Entrypoint]: Stopping temporary server
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: FTS optimize thread exiting.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Starting shutdown...
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Buffer pool(s) dump completed at 240929 17:02:09
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Shutdown completed; log sequence number 321066; transaction id 298
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] mariadbd: Shutdown complete
17:04:45 mariadb |
17:04:45 mariadb | 2024-09-29 17:02:09+00:00 [Note] [Entrypoint]: Temporary server stopped
17:04:45 mariadb |
17:04:45 mariadb | 2024-09-29 17:02:09+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
17:04:45 mariadb |
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Number of transaction pools: 1
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Completed initialization of buffer pool
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: 128 rollback segments are active.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: log sequence number 321066; transaction id 299
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] Plugin 'FEEDBACK' is disabled.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] Server socket created on IP: '0.0.0.0'.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] Server socket created on IP: '::'.
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] mariadbd: ready for connections.
17:04:45 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
17:04:45 mariadb | 2024-09-29 17:02:09 0 [Note] InnoDB: Buffer pool(s) load completed at 240929 17:02:09
17:04:45 mariadb | 2024-09-29 17:02:10 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
17:04:45 mariadb | 2024-09-29 17:02:10 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
17:04:45 mariadb | 2024-09-29 17:02:10 28 [Warning] Aborted connection 28 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
17:04:45 mariadb | 2024-09-29 17:02:10 38 [Warning] Aborted connection 38 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
17:04:45 ===================================
17:04:45 ======== Logs from apex-pdp ========
17:04:45 policy-apex-pdp | Waiting for mariadb port 3306...
17:04:45 policy-apex-pdp | mariadb (172.17.0.5:3306) open
17:04:45 policy-apex-pdp | Waiting for kafka port 9092...
17:04:45 policy-apex-pdp | kafka (172.17.0.7:9092) open
17:04:45 policy-apex-pdp | Waiting for pap port 6969...
17:04:45 policy-apex-pdp | pap (172.17.0.10:6969) open
17:04:45 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.505+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.721+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
17:04:45 policy-apex-pdp | allow.auto.create.topics = true
17:04:45 policy-apex-pdp | auto.commit.interval.ms = 5000
17:04:45 policy-apex-pdp | auto.include.jmx.reporter = true
17:04:45 policy-apex-pdp | auto.offset.reset = latest
17:04:45 policy-apex-pdp | bootstrap.servers = [kafka:9092]
17:04:45 policy-apex-pdp | check.crcs = true
17:04:45 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
17:04:45 policy-apex-pdp | client.id = consumer-58d69853-23cf-4753-9963-1fc883efa8c8-1
17:04:45 policy-apex-pdp | client.rack =
17:04:45 policy-apex-pdp | connections.max.idle.ms = 540000
17:04:45 policy-apex-pdp | default.api.timeout.ms = 60000
17:04:45 policy-apex-pdp | enable.auto.commit = true
17:04:45 policy-apex-pdp | exclude.internal.topics = true
17:04:45 policy-apex-pdp | fetch.max.bytes = 52428800
17:04:45 policy-apex-pdp | fetch.max.wait.ms = 500
17:04:45 policy-apex-pdp | fetch.min.bytes = 1
17:04:45 policy-apex-pdp | group.id = 58d69853-23cf-4753-9963-1fc883efa8c8
17:04:45 policy-apex-pdp | group.instance.id = null
17:04:45 policy-apex-pdp | heartbeat.interval.ms = 3000
17:04:45 policy-apex-pdp | interceptor.classes = []
17:04:45 policy-apex-pdp | internal.leave.group.on.close = true
17:04:45 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
17:04:45 policy-apex-pdp | isolation.level = read_uncommitted
17:04:45 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:04:45 policy-apex-pdp | max.partition.fetch.bytes = 1048576
17:04:45 policy-apex-pdp | max.poll.interval.ms = 300000
17:04:45 policy-apex-pdp | max.poll.records = 500
17:04:45 policy-apex-pdp | metadata.max.age.ms = 300000
17:04:45 policy-apex-pdp | metric.reporters = []
17:04:45 policy-apex-pdp | metrics.num.samples = 2
17:04:45 policy-apex-pdp | metrics.recording.level = INFO
17:04:45 policy-apex-pdp | metrics.sample.window.ms = 30000
17:04:45 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
17:04:45 policy-apex-pdp | receive.buffer.bytes = 65536
17:04:45 policy-apex-pdp | reconnect.backoff.max.ms = 1000
17:04:45 policy-apex-pdp | reconnect.backoff.ms = 50
17:04:45 policy-apex-pdp | request.timeout.ms = 30000
17:04:45 policy-apex-pdp | retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.client.callback.handler.class = null
17:04:45 policy-apex-pdp | sasl.jaas.config = null
17:04:45 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:04:45 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
17:04:45 policy-apex-pdp | sasl.kerberos.service.name = null
17:04:45 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
17:04:45 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
17:04:45 policy-apex-pdp | sasl.login.callback.handler.class = null
17:04:45 policy-apex-pdp | sasl.login.class = null
17:04:45 policy-apex-pdp | sasl.login.connect.timeout.ms = null
17:04:45 policy-apex-pdp | sasl.login.read.timeout.ms = null
17:04:45 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
17:04:45 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
17:04:45 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
17:04:45 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
17:04:45 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
17:04:45 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.mechanism = GSSAPI
17:04:45 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
17:04:45 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
17:04:45 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
17:04:45 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
17:04:45 policy-apex-pdp | security.protocol = PLAINTEXT
17:04:45 policy-apex-pdp | security.providers = null
17:04:45 policy-apex-pdp | send.buffer.bytes = 131072
17:04:45 policy-apex-pdp | session.timeout.ms = 45000
17:04:45 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
17:04:45 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
17:04:45 policy-apex-pdp | ssl.cipher.suites = null
17:04:45 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:04:45 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
17:04:45 policy-apex-pdp | ssl.engine.factory.class = null
17:04:45 policy-apex-pdp | ssl.key.password = null
17:04:45 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
17:04:45 policy-apex-pdp | ssl.keystore.certificate.chain = null
17:04:45 policy-apex-pdp | ssl.keystore.key = null
17:04:45 policy-apex-pdp | ssl.keystore.location = null
17:04:45 policy-apex-pdp | ssl.keystore.password = null
17:04:45 policy-apex-pdp | ssl.keystore.type = JKS
17:04:45 policy-apex-pdp | ssl.protocol = TLSv1.3
17:04:45 policy-apex-pdp | ssl.provider = null
17:04:45 policy-apex-pdp | ssl.secure.random.implementation = null
17:04:45 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
17:04:45 policy-apex-pdp | ssl.truststore.certificates = null
17:04:45 policy-apex-pdp | ssl.truststore.location = null
17:04:45 policy-apex-pdp | ssl.truststore.password = null
17:04:45 policy-apex-pdp | ssl.truststore.type = JKS
17:04:45 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:04:45 policy-apex-pdp |
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.886+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.887+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.887+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629363885
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.889+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-1, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Subscribed to topic(s): policy-pdp-pap
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.901+00:00|INFO|ServiceManager|main] service manager starting
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.902+00:00|INFO|ServiceManager|main] service manager starting topics
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.903+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=58d69853-23cf-4753-9963-1fc883efa8c8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.922+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
17:04:45 policy-apex-pdp | allow.auto.create.topics = true
17:04:45 policy-apex-pdp | auto.commit.interval.ms = 5000
17:04:45 policy-apex-pdp | auto.include.jmx.reporter = true
17:04:45 policy-apex-pdp | auto.offset.reset = latest
17:04:45 policy-apex-pdp | bootstrap.servers = [kafka:9092]
17:04:45 policy-apex-pdp | check.crcs = true
17:04:45 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
17:04:45 policy-apex-pdp | client.id = consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2
17:04:45 policy-apex-pdp | client.rack =
17:04:45 policy-apex-pdp | connections.max.idle.ms = 540000
17:04:45 policy-apex-pdp | default.api.timeout.ms = 60000
17:04:45 policy-apex-pdp | enable.auto.commit = true
17:04:45 policy-apex-pdp | exclude.internal.topics = true
17:04:45 policy-apex-pdp | fetch.max.bytes = 52428800
17:04:45 policy-apex-pdp | fetch.max.wait.ms = 500
17:04:45 policy-apex-pdp | fetch.min.bytes = 1
17:04:45 policy-apex-pdp | group.id = 58d69853-23cf-4753-9963-1fc883efa8c8
17:04:45 policy-apex-pdp | group.instance.id = null
17:04:45 policy-apex-pdp | heartbeat.interval.ms = 3000
17:04:45 policy-apex-pdp | interceptor.classes = []
17:04:45 policy-apex-pdp | internal.leave.group.on.close = true
17:04:45 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
17:04:45 policy-apex-pdp | isolation.level = read_uncommitted
17:04:45 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:04:45 policy-apex-pdp | max.partition.fetch.bytes = 1048576
17:04:45 policy-apex-pdp | max.poll.interval.ms = 300000
17:04:45 policy-apex-pdp | max.poll.records = 500
17:04:45 policy-apex-pdp | metadata.max.age.ms = 300000
17:04:45 policy-apex-pdp | metric.reporters = []
17:04:45 policy-apex-pdp | metrics.num.samples = 2
17:04:45 policy-apex-pdp | metrics.recording.level = INFO
17:04:45 policy-apex-pdp | metrics.sample.window.ms = 30000
17:04:45 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
17:04:45 policy-apex-pdp | receive.buffer.bytes = 65536
17:04:45 policy-apex-pdp | reconnect.backoff.max.ms = 1000
17:04:45 policy-apex-pdp | reconnect.backoff.ms = 50
17:04:45 policy-apex-pdp | request.timeout.ms = 30000
17:04:45 policy-apex-pdp | retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.client.callback.handler.class = null
17:04:45 policy-apex-pdp | sasl.jaas.config = null
17:04:45 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:04:45 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
17:04:45 policy-apex-pdp | sasl.kerberos.service.name = null
17:04:45 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
17:04:45 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
17:04:45 policy-apex-pdp | sasl.login.callback.handler.class = null
17:04:45 policy-apex-pdp | sasl.login.class = null
17:04:45 policy-apex-pdp | sasl.login.connect.timeout.ms = null
17:04:45 policy-apex-pdp | sasl.login.read.timeout.ms = null
17:04:45 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
17:04:45 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
17:04:45 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
17:04:45 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
17:04:45 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
17:04:45 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.mechanism = GSSAPI
17:04:45 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
17:04:45 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
17:04:45 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
17:04:45 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
17:04:45 policy-apex-pdp | security.protocol = PLAINTEXT
17:04:45 policy-apex-pdp | security.providers = null
17:04:45 policy-apex-pdp | send.buffer.bytes = 131072
17:04:45 policy-apex-pdp | session.timeout.ms = 45000
17:04:45 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
17:04:45 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
17:04:45 policy-apex-pdp | ssl.cipher.suites = null
17:04:45 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:04:45 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
17:04:45 policy-apex-pdp | ssl.engine.factory.class = null
17:04:45 policy-apex-pdp | ssl.key.password = null
17:04:45 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
17:04:45 policy-apex-pdp | ssl.keystore.certificate.chain = null
17:04:45 policy-apex-pdp | ssl.keystore.key = null
17:04:45 policy-apex-pdp | ssl.keystore.location = null
17:04:45 policy-apex-pdp | ssl.keystore.password = null
17:04:45 policy-apex-pdp | ssl.keystore.type = JKS
17:04:45 policy-apex-pdp | ssl.protocol = TLSv1.3
17:04:45 policy-apex-pdp | ssl.provider = null
17:04:45 policy-apex-pdp | ssl.secure.random.implementation = null
17:04:45 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
17:04:45 policy-apex-pdp | ssl.truststore.certificates = null
17:04:45 policy-apex-pdp | ssl.truststore.location = null
17:04:45 policy-apex-pdp | ssl.truststore.password = null
17:04:45 policy-apex-pdp | ssl.truststore.type = JKS
17:04:45 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:04:45 policy-apex-pdp |
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.930+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.931+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.931+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629363930
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.931+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Subscribed to topic(s): policy-pdp-pap
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.932+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d9843615-26f0-4762-8d03-99bce5f90442, alive=false, publisher=null]]: starting
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.947+00:00|INFO|ProducerConfig|main] ProducerConfig values:
17:04:45 policy-apex-pdp | acks = -1
17:04:45 policy-apex-pdp | auto.include.jmx.reporter = true
17:04:45 policy-apex-pdp | batch.size = 16384
17:04:45 policy-apex-pdp | bootstrap.servers = [kafka:9092]
17:04:45 policy-apex-pdp | buffer.memory = 33554432
17:04:45 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
17:04:45 policy-apex-pdp | client.id = producer-1
17:04:45 policy-apex-pdp | compression.type = none
17:04:45 policy-apex-pdp | connections.max.idle.ms = 540000
17:04:45 policy-apex-pdp | delivery.timeout.ms = 120000
17:04:45
policy-apex-pdp | enable.idempotence = true
17:04:45 policy-apex-pdp | interceptor.classes = []
17:04:45 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
17:04:45 policy-apex-pdp | linger.ms = 0
17:04:45 policy-apex-pdp | max.block.ms = 60000
17:04:45 policy-apex-pdp | max.in.flight.requests.per.connection = 5
17:04:45 policy-apex-pdp | max.request.size = 1048576
17:04:45 policy-apex-pdp | metadata.max.age.ms = 300000
17:04:45 policy-apex-pdp | metadata.max.idle.ms = 300000
17:04:45 policy-apex-pdp | metric.reporters = []
17:04:45 policy-apex-pdp | metrics.num.samples = 2
17:04:45 policy-apex-pdp | metrics.recording.level = INFO
17:04:45 policy-apex-pdp | metrics.sample.window.ms = 30000
17:04:45 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
17:04:45 policy-apex-pdp | partitioner.availability.timeout.ms = 0
17:04:45 policy-apex-pdp | partitioner.class = null
17:04:45 policy-apex-pdp | partitioner.ignore.keys = false
17:04:45 policy-apex-pdp | receive.buffer.bytes = 32768
17:04:45 policy-apex-pdp | reconnect.backoff.max.ms = 1000
17:04:45 policy-apex-pdp | reconnect.backoff.ms = 50
17:04:45 policy-apex-pdp | request.timeout.ms = 30000
17:04:45 policy-apex-pdp | retries = 2147483647
17:04:45 policy-apex-pdp | retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.client.callback.handler.class = null
17:04:45 policy-apex-pdp | sasl.jaas.config = null
17:04:45 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:04:45 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
17:04:45 policy-apex-pdp | sasl.kerberos.service.name = null
17:04:45 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
17:04:45 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
17:04:45 policy-apex-pdp | sasl.login.callback.handler.class = null
17:04:45 policy-apex-pdp | sasl.login.class = null
17:04:45 policy-apex-pdp | sasl.login.connect.timeout.ms = null
17:04:45 policy-apex-pdp | sasl.login.read.timeout.ms = null
17:04:45 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
17:04:45 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
17:04:45 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
17:04:45 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
17:04:45 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
17:04:45 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.mechanism = GSSAPI
17:04:45 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
17:04:45 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:04:45 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
17:04:45 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
17:04:45 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
17:04:45 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
17:04:45 policy-apex-pdp | security.protocol = PLAINTEXT
17:04:45 policy-apex-pdp | security.providers = null
17:04:45 policy-apex-pdp | send.buffer.bytes = 131072
17:04:45 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
17:04:45 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
17:04:45 policy-apex-pdp | ssl.cipher.suites = null
17:04:45 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:04:45 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
17:04:45 policy-apex-pdp | ssl.engine.factory.class = null
17:04:45 policy-apex-pdp | ssl.key.password = null
17:04:45 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
17:04:45 policy-apex-pdp | ssl.keystore.certificate.chain = null
17:04:45 policy-apex-pdp | ssl.keystore.key = null
17:04:45 policy-apex-pdp | ssl.keystore.location = null
17:04:45 policy-apex-pdp | ssl.keystore.password = null
17:04:45 policy-apex-pdp | ssl.keystore.type = JKS
17:04:45 policy-apex-pdp | ssl.protocol = TLSv1.3
17:04:45 policy-apex-pdp | ssl.provider = null
17:04:45 policy-apex-pdp | ssl.secure.random.implementation = null
17:04:45 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
17:04:45 policy-apex-pdp | ssl.truststore.certificates = null
17:04:45 policy-apex-pdp | ssl.truststore.location = null
17:04:45 policy-apex-pdp | ssl.truststore.password = null
17:04:45 policy-apex-pdp | ssl.truststore.type = JKS
17:04:45 policy-apex-pdp | transaction.timeout.ms = 60000
17:04:45 policy-apex-pdp | transactional.id = null
17:04:45 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
17:04:45 policy-apex-pdp |
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.964+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.986+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.987+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.987+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629363986
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.987+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d9843615-26f0-4762-8d03-99bce5f90442, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.987+00:00|INFO|ServiceManager|main] service manager starting set alive
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.987+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.990+00:00|INFO|ServiceManager|main] service manager starting topic sinks
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.991+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.993+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.993+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.994+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.994+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=58d69853-23cf-4753-9963-1fc883efa8c8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false,
uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a 17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.994+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=58d69853-23cf-4753-9963-1fc883efa8c8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 17:04:45 policy-apex-pdp | [2024-09-29T17:02:43.995+00:00|INFO|ServiceManager|main] service manager starting Create REST server 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.009+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 17:04:45 policy-apex-pdp | [] 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.012+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7de34eb6-20e4-430e-92ba-5a3611c98514","timestampMs":1727629363994,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"} 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.241+00:00|INFO|ServiceManager|main] service manager starting Rest Server 17:04:45 
policy-apex-pdp | [2024-09-29T17:02:44.242+00:00|INFO|ServiceManager|main] service manager starting 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.242+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.242+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.259+00:00|INFO|ServiceManager|main] service manager started 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.259+00:00|INFO|ServiceManager|main] service manager started 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.259+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.259+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.355+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: o98Zehj3SPmhzfdRi49uhg 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.355+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Cluster ID: o98Zehj3SPmhzfdRi49uhg 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.357+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 
with epoch 0 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.363+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] (Re-)joining group 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.381+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Request joining group due to: need to re-join with the given member-id: consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2-5a134043-d934-4f04-a48e-7072330d0e62 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.382+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.382+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] (Re-)joining group 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.926+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 17:04:45 policy-apex-pdp | [2024-09-29T17:02:44.926+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 17:04:45 policy-apex-pdp | [2024-09-29T17:02:47.386+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Successfully joined group with generation Generation{generationId=1, memberId='consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2-5a134043-d934-4f04-a48e-7072330d0e62', protocol='range'} 17:04:45 policy-apex-pdp | [2024-09-29T17:02:47.396+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Finished assignment for group at generation 1: {consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2-5a134043-d934-4f04-a48e-7072330d0e62=Assignment(partitions=[policy-pdp-pap-0])} 17:04:45 policy-apex-pdp | [2024-09-29T17:02:47.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Successfully synced group in generation Generation{generationId=1, memberId='consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2-5a134043-d934-4f04-a48e-7072330d0e62', protocol='range'} 17:04:45 policy-apex-pdp | [2024-09-29T17:02:47.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, 
groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:04:45 policy-apex-pdp | [2024-09-29T17:02:47.405+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Adding newly assigned partitions: policy-pdp-pap-0 17:04:45 policy-apex-pdp | [2024-09-29T17:02:47.411+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Found no committed offset for partition policy-pdp-pap-0 17:04:45 policy-apex-pdp | [2024-09-29T17:02:47.420+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-58d69853-23cf-4753-9963-1fc883efa8c8-2, groupId=58d69853-23cf-4753-9963-1fc883efa8c8] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
17:04:45 policy-apex-pdp | [2024-09-29T17:02:56.179+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.2 - policyadmin [29/Sep/2024:17:02:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/2.54.1" 17:04:45 policy-apex-pdp | [2024-09-29T17:03:03.993+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d9828564-9921-47fa-a0d3-ffea5784e360","timestampMs":1727629383993,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.027+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d9828564-9921-47fa-a0d3-ffea5784e360","timestampMs":1727629383993,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.030+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.190+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","timestampMs":1727629384127,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.201+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.201+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eb9f646b-eaeb-4593-a057-2be0b938fd17","timestampMs":1727629384201,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.204+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"78c9ebdf-5657-4c85-876f-d0b4a6324cf3","timestampMs":1727629384204,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.226+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eb9f646b-eaeb-4593-a057-2be0b938fd17","timestampMs":1727629384201,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.227+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.230+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"78c9ebdf-5657-4c85-876f-d0b4a6324cf3","timestampMs":1727629384204,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.230+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.283+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"2215677e-d7a0-49a3-b961-0181a0078062","timestampMs":1727629384128,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.286+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"2215677e-d7a0-49a3-b961-0181a0078062","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"291487ea-7cae-4e80-b23a-3c9bb68e4950","timestampMs":1727629384285,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.295+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"2215677e-d7a0-49a3-b961-0181a0078062","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"291487ea-7cae-4e80-b23a-3c9bb68e4950","timestampMs":1727629384285,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.295+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.365+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5899912b-3d11-43b6-a357-a6d8cc275175","timestampMs":1727629384339,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.366+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5899912b-3d11-43b6-a357-a6d8cc275175","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"86c2bf53-fb39-491c-a8d8-b8bc859fd94e","timestampMs":1727629384366,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.377+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:04:45 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5899912b-3d11-43b6-a357-a6d8cc275175","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"86c2bf53-fb39-491c-a8d8-b8bc859fd94e","timestampMs":1727629384366,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-apex-pdp | [2024-09-29T17:03:04.377+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
17:04:45 policy-apex-pdp | [2024-09-29T17:03:56.086+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.2 - policyadmin [29/Sep/2024:17:03:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.54.1"
17:04:45 ===================================
17:04:45 ======== Logs from api ========
17:04:45 policy-api | Waiting for mariadb port 3306...
17:04:45 policy-api | mariadb (172.17.0.5:3306) open
17:04:45 policy-api | Waiting for policy-db-migrator port 6824...
17:04:45 policy-api | policy-db-migrator (172.17.0.8:6824) open
17:04:45 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
17:04:45 policy-api |
17:04:45 policy-api |   .   ____          _            __ _ _
17:04:45 policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
17:04:45 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
17:04:45 policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
17:04:45 policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
17:04:45 policy-api |  =========|_|==============|___/=/_/_/_/
17:04:45 policy-api |  :: Spring Boot ::                (v3.1.10)
17:04:45 policy-api |
17:04:45 policy-api | [2024-09-29T17:02:18.395+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
17:04:45 policy-api | [2024-09-29T17:02:18.464+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin)
17:04:45 policy-api | [2024-09-29T17:02:18.466+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
17:04:45 policy-api | [2024-09-29T17:02:20.578+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
17:04:45 policy-api | [2024-09-29T17:02:20.670+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 81 ms. Found 6 JPA repository interfaces.
17:04:45 policy-api | [2024-09-29T17:02:21.140+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
17:04:45 policy-api | [2024-09-29T17:02:21.140+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
17:04:45 policy-api | [2024-09-29T17:02:21.838+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
17:04:45 policy-api | [2024-09-29T17:02:21.849+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
17:04:45 policy-api | [2024-09-29T17:02:21.855+00:00|INFO|StandardService|main] Starting service [Tomcat]
17:04:45 policy-api | [2024-09-29T17:02:21.855+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
17:04:45 policy-api | [2024-09-29T17:02:21.991+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
17:04:45 policy-api | [2024-09-29T17:02:21.992+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3444 ms
17:04:45 policy-api | [2024-09-29T17:02:22.469+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
17:04:45 policy-api | [2024-09-29T17:02:22.546+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
17:04:45 policy-api | [2024-09-29T17:02:22.596+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
17:04:45 policy-api | [2024-09-29T17:02:22.885+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
17:04:45 policy-api | [2024-09-29T17:02:22.916+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
17:04:45 policy-api | [2024-09-29T17:02:23.008+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@26844abb
17:04:45 policy-api | [2024-09-29T17:02:23.010+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
17:04:45 policy-api | [2024-09-29T17:02:25.087+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
17:04:45 policy-api | [2024-09-29T17:02:25.090+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
17:04:45 policy-api | [2024-09-29T17:02:26.229+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
17:04:45 policy-api | [2024-09-29T17:02:27.098+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
17:04:45 policy-api | [2024-09-29T17:02:28.234+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
17:04:45 policy-api | [2024-09-29T17:02:28.461+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7ac47e87, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6e0a9752, org.springframework.security.web.context.SecurityContextHolderFilter@4743220d, org.springframework.security.web.header.HeaderWriterFilter@56584f06, org.springframework.security.web.authentication.logout.LogoutFilter@463bdee9, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@203f1447, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5c5e301f, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@74355746, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5977f3d6, org.springframework.security.web.access.ExceptionTranslationFilter@3033e54c,
org.springframework.security.web.access.intercept.AuthorizationFilter@1b5593e6]
17:04:45 policy-api | [2024-09-29T17:02:29.372+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
17:04:45 policy-api | [2024-09-29T17:02:29.475+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
17:04:45 policy-api | [2024-09-29T17:02:29.494+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
17:04:45 policy-api | [2024-09-29T17:02:29.511+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.833 seconds (process running for 12.483)
17:04:45 policy-api | [2024-09-29T17:02:39.930+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
17:04:45 policy-api | [2024-09-29T17:02:39.930+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
17:04:45 policy-api | [2024-09-29T17:02:39.932+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
17:04:45 policy-api | [2024-09-29T17:03:13.800+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
17:04:45 policy-api | []
17:04:45 ===================================
17:04:45 ======== Logs from csit-tests ========
17:04:45 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
17:04:45 policy-csit | Run Robot test
17:04:45 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
17:04:45 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
17:04:45 policy-csit | -v POLICY_API_IP:policy-api:6969
17:04:45 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
17:04:45 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
17:04:45 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
17:04:45 policy-csit | -v APEX_IP:policy-apex-pdp:6969
17:04:45 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
17:04:45 policy-csit | -v KAFKA_IP:kafka:9092
17:04:45 policy-csit | -v PROMETHEUS_IP:prometheus:9090
17:04:45 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
17:04:45 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
17:04:45 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
17:04:45 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
17:04:45 policy-csit | -v TEMP_FOLDER:/tmp/distribution
17:04:45 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
17:04:45 policy-csit | -v CLAMP_K8S_TEST:
17:04:45 policy-csit | Starting Robot test suites ...
17:04:45 policy-csit | ==============================================================================
17:04:45 policy-csit | Pap-Test & Pap-Slas
17:04:45 policy-csit | ==============================================================================
17:04:45 policy-csit | Pap-Test & Pap-Slas.Pap-Test
17:04:45 policy-csit | ==============================================================================
17:04:45 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | Healthcheck :: Verify policy pap health check | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
17:04:45 policy-csit | 22 tests, 22 passed, 0 failed
17:04:45 policy-csit | ==============================================================================
17:04:45 policy-csit | Pap-Test & Pap-Slas.Pap-Slas
17:04:45 policy-csit | ==============================================================================
17:04:45 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
17:04:45 policy-csit | ------------------------------------------------------------------------------
17:04:45 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic...
| PASS | 17:04:45 policy-csit | ------------------------------------------------------------------------------ 17:04:45 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | 17:04:45 policy-csit | ------------------------------------------------------------------------------ 17:04:45 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 17:04:45 policy-csit | 8 tests, 8 passed, 0 failed 17:04:45 policy-csit | ============================================================================== 17:04:45 policy-csit | Pap-Test & Pap-Slas | PASS | 17:04:45 policy-csit | 30 tests, 30 passed, 0 failed 17:04:45 policy-csit | ============================================================================== 17:04:45 policy-csit | Output: /tmp/results/output.xml 17:04:45 policy-csit | Log: /tmp/results/log.html 17:04:45 policy-csit | Report: /tmp/results/report.html 17:04:45 policy-csit | RESULT: 0 17:04:45 =================================== 17:04:45 ======== Logs from policy-db-migrator ======== 17:04:45 policy-db-migrator | Waiting for mariadb port 3306... 17:04:45 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:04:45 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:04:45 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:04:45 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:04:45 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:04:45 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded! 
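The db-migrator's start-up sequence above is just a retry loop: it calls `nc` against the `mariadb` host until port 3306 stops refusing connections, then proceeds with the upgrade. A minimal Python sketch of the same wait-for-port pattern (the function name, retry delay, and attempt count are illustrative assumptions, not taken from the migrator script):

```python
import socket
import time

def wait_for_port(host: str, port: int, retry_delay: float = 2.0,
                  max_attempts: int = 30) -> bool:
    """Retry a TCP connect until the service answers, like the migrator's nc loop."""
    for _ in range(max_attempts):
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True   # connect succeeded -> port is open
        except OSError:
            time.sleep(retry_delay)  # "Connection refused" -> service not up yet
    return False

# Example (hypothetical host, mirroring the log's "Waiting for mariadb port 3306"):
# wait_for_port("mariadb", 3306)
```

Each failed attempt corresponds to one of the `nc: connect ... failed: Connection refused` lines in the log; the single success line ends the loop.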
17:04:45 policy-db-migrator | 321 blocks 17:04:45 policy-db-migrator | Preparing upgrade release version: 0800 17:04:45 policy-db-migrator | Preparing upgrade release version: 0900 17:04:45 policy-db-migrator | Preparing upgrade release version: 1000 17:04:45 policy-db-migrator | Preparing upgrade release version: 1100 17:04:45 policy-db-migrator | Preparing upgrade release version: 1200 17:04:45 policy-db-migrator | Preparing upgrade release version: 1300 17:04:45 policy-db-migrator | Done 17:04:45 policy-db-migrator | name version 17:04:45 policy-db-migrator | policyadmin 0 17:04:45 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 17:04:45 policy-db-migrator | upgrade: 0 -> 1300 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 17:04:45 
policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 
policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 
0250-jpatoscanodetemplate_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | 
-------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 
policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 17:04:45 
policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY 
VARCHAR(255) NULL) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0450-pdpgroup.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0470-pdp.sql 17:04:45 policy-db-migrator | -------------- 
17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 
policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, 
conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | 
-------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0570-toscadatatype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version 
VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0630-toscanodetype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 17:04:45 
policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0660-toscaparameter.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0670-toscapolicies.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, 
version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0690-toscapolicy.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 
17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0730-toscaproperty.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype 
(`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0770-toscarequirement.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY 
KEY PK_TOSCAREQUIREMENT (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0780-toscarequirements.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, 
relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0820-toscatrigger.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 
0830-FK_ToscaNodeTemplate_capabilitiesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON 
pdppolicystatus(PDPGROUP) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator 
| -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 
policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate 
(parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0100-pdp.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND 
p.timeStamp = t.timeStamp) SET p.id=t.row_num 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 17:04:45 policy-db-migrator | JOIN pdpstatistics b 17:04:45 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 17:04:45 policy-db-migrator | SET a.id = b.id 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0210-sequence.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0220-sequence.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 
> upgrade 0110-jpatoscapolicytype_targets.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0120-toscatrigger.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0140-toscaparameter.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0150-toscaproperty.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 17:04:45 policy-db-migrator | -------------- 17:04:45 
policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0100-upgrade.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | select 'upgrade to 1100 completed' as msg 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | msg 17:04:45 policy-db-migrator | upgrade to 1100 completed 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 
17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0120-audit_sequence.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | 17:04:45 policy-db-migrator | -------------- 17:04:45 policy-db-migrator | TRUNCATE TABLE sequence 17:04:45 
policy-db-migrator | --------------
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | DROP TABLE pdpstatistics
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | DROP TABLE statistics_sequence
17:04:45 policy-db-migrator | --------------
17:04:45 policy-db-migrator | 
17:04:45 policy-db-migrator | policyadmin: OK: upgrade (1300)
17:04:45 policy-db-migrator | name version
17:04:45 policy-db-migrator | policyadmin 1300
17:04:45 policy-db-migrator | ID script operation from_version to_version tag success atTime
17:04:45 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 6 
0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:10
17:04:45 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2909241702100800u 1 
2024-09-29 17:02:11
17:04:45 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:11
17:04:45 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2909241702100800u 1 
2024-09-29 17:02:12
17:04:45 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 54 0630-toscanodetype.sql 
upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:12
17:04:45 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 71 
0800-toscaservicetemplate.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:13
17:04:45 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 86 
0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2909241702100800u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:14
17:04:45 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 
0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2909241702100900u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15
17:04:45 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:15 17:04:45 
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2909241702101000u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2909241702101100u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2909241702101200u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2909241702101200u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2909241702101200u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2909241702101200u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2909241702101300u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2909241702101300u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2909241702101300u 1 2024-09-29 17:02:16
17:04:45 policy-db-migrator | policyadmin: OK @ 1300
17:04:45 ===================================
17:04:45 ======== Logs from pap ========
17:04:45 policy-pap | Waiting for mariadb port 3306...
17:04:45 policy-pap | mariadb (172.17.0.5:3306) open
17:04:45 policy-pap | Waiting for kafka port 9092...
17:04:45 policy-pap | kafka (172.17.0.7:9092) open
17:04:45 policy-pap | Waiting for api port 6969...
17:04:45 policy-pap | api (172.17.0.9:6969) open
17:04:45 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
17:04:45 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
17:04:45 policy-pap | 
17:04:45 policy-pap | . 
____ _ __ _ _
17:04:45 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
17:04:45 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
17:04:45 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
17:04:45 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
17:04:45 policy-pap | =========|_|==============|___/=/_/_/_/
17:04:45 policy-pap | :: Spring Boot :: (v3.1.10)
17:04:45 policy-pap | 
17:04:45 policy-pap | [2024-09-29T17:02:32.196+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
17:04:45 policy-pap | [2024-09-29T17:02:32.267+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
17:04:45 policy-pap | [2024-09-29T17:02:32.268+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
17:04:45 policy-pap | [2024-09-29T17:02:34.408+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
17:04:45 policy-pap | [2024-09-29T17:02:34.514+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 7 JPA repository interfaces.
17:04:45 policy-pap | [2024-09-29T17:02:35.008+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
17:04:45 policy-pap | [2024-09-29T17:02:35.008+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
17:04:45 policy-pap | [2024-09-29T17:02:35.679+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
17:04:45 policy-pap | [2024-09-29T17:02:35.691+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
17:04:45 policy-pap | [2024-09-29T17:02:35.694+00:00|INFO|StandardService|main] Starting service [Tomcat]
17:04:45 policy-pap | [2024-09-29T17:02:35.694+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
17:04:45 policy-pap | [2024-09-29T17:02:35.795+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
17:04:45 policy-pap | [2024-09-29T17:02:35.795+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3447 ms
17:04:45 policy-pap | [2024-09-29T17:02:36.232+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
17:04:45 policy-pap | [2024-09-29T17:02:36.284+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
17:04:45 policy-pap | [2024-09-29T17:02:36.602+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
17:04:45 policy-pap | [2024-09-29T17:02:36.703+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@72f8ae0c
17:04:45 policy-pap | [2024-09-29T17:02:36.705+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
17:04:45 policy-pap | [2024-09-29T17:02:36.739+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
17:04:45 policy-pap | [2024-09-29T17:02:38.395+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
17:04:45 policy-pap | [2024-09-29T17:02:38.410+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
17:04:45 policy-pap | [2024-09-29T17:02:38.929+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
17:04:45 policy-pap | [2024-09-29T17:02:39.358+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
17:04:45 policy-pap | [2024-09-29T17:02:39.487+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
17:04:45 policy-pap | [2024-09-29T17:02:39.795+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 
17:04:45 policy-pap | 	allow.auto.create.topics = true
17:04:45 policy-pap | 	auto.commit.interval.ms = 5000
17:04:45 policy-pap | 	auto.include.jmx.reporter = true
17:04:45 policy-pap | 	auto.offset.reset = latest
17:04:45 policy-pap | 	bootstrap.servers = [kafka:9092]
17:04:45 policy-pap | 	check.crcs = true
17:04:45 policy-pap | 	client.dns.lookup = use_all_dns_ips
17:04:45 policy-pap | 	client.id = consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-1
17:04:45 policy-pap | 	client.rack = 
17:04:45 policy-pap | 	connections.max.idle.ms = 540000
17:04:45 policy-pap | 	default.api.timeout.ms = 60000
17:04:45 policy-pap | 	enable.auto.commit = true
17:04:45 policy-pap | 	exclude.internal.topics = true
17:04:45 policy-pap | 	fetch.max.bytes = 52428800
17:04:45 policy-pap | 	fetch.max.wait.ms = 500
17:04:45 policy-pap | 	fetch.min.bytes = 1
17:04:45 policy-pap | 	group.id = 2cd2bf8c-cde2-4801-93ac-009d1b720a1d
17:04:45 policy-pap | 	group.instance.id = null
17:04:45 policy-pap | 	heartbeat.interval.ms = 3000
17:04:45 policy-pap | 	interceptor.classes = []
17:04:45 policy-pap | 	internal.leave.group.on.close = true
17:04:45 policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
17:04:45 policy-pap | 	isolation.level = read_uncommitted
17:04:45 policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:04:45 policy-pap | 	max.partition.fetch.bytes = 1048576
17:04:45 policy-pap | 	max.poll.interval.ms = 300000
17:04:45 policy-pap | 	max.poll.records = 500
17:04:45 policy-pap | 	metadata.max.age.ms = 300000
17:04:45 policy-pap | 	metric.reporters = []
17:04:45 policy-pap | 	metrics.num.samples = 2
17:04:45 policy-pap | 	metrics.recording.level = INFO
17:04:45 policy-pap | 	metrics.sample.window.ms = 30000 17:04:45 
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
17:04:45 policy-pap | 	receive.buffer.bytes = 65536
17:04:45 policy-pap | 	reconnect.backoff.max.ms = 1000
17:04:45 policy-pap | 	reconnect.backoff.ms = 50
17:04:45 policy-pap | 	request.timeout.ms = 30000
17:04:45 policy-pap | 	retry.backoff.ms = 100
17:04:45 policy-pap | 	sasl.client.callback.handler.class = null
17:04:45 policy-pap | 	sasl.jaas.config = null
17:04:45 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:04:45 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
17:04:45 policy-pap | 	sasl.kerberos.service.name = null
17:04:45 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
17:04:45 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
17:04:45 policy-pap | 	sasl.login.callback.handler.class = null
17:04:45 policy-pap | 	sasl.login.class = null
17:04:45 policy-pap | 	sasl.login.connect.timeout.ms = null
17:04:45 policy-pap | 	sasl.login.read.timeout.ms = null
17:04:45 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
17:04:45 policy-pap | 	sasl.login.refresh.min.period.seconds = 60
17:04:45 policy-pap | 	sasl.login.refresh.window.factor = 0.8
17:04:45 policy-pap | 	sasl.login.refresh.window.jitter = 0.05
17:04:45 policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
17:04:45 policy-pap | 	sasl.login.retry.backoff.ms = 100
17:04:45 policy-pap | 	sasl.mechanism = GSSAPI
17:04:45 policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
17:04:45 policy-pap | 	sasl.oauthbearer.expected.audience = null
17:04:45 policy-pap | 	sasl.oauthbearer.expected.issuer = null
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
17:04:45 policy-pap | 
sasl.oauthbearer.scope.claim.name = scope
17:04:45 policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
17:04:45 policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
17:04:45 policy-pap | 	security.protocol = PLAINTEXT
17:04:45 policy-pap | 	security.providers = null
17:04:45 policy-pap | 	send.buffer.bytes = 131072
17:04:45 policy-pap | 	session.timeout.ms = 45000
17:04:45 policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
17:04:45 policy-pap | 	socket.connection.setup.timeout.ms = 10000
17:04:45 policy-pap | 	ssl.cipher.suites = null
17:04:45 policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:04:45 policy-pap | 	ssl.endpoint.identification.algorithm = https
17:04:45 policy-pap | 	ssl.engine.factory.class = null
17:04:45 policy-pap | 	ssl.key.password = null
17:04:45 policy-pap | 	ssl.keymanager.algorithm = SunX509
17:04:45 policy-pap | 	ssl.keystore.certificate.chain = null
17:04:45 policy-pap | 	ssl.keystore.key = null
17:04:45 policy-pap | 	ssl.keystore.location = null
17:04:45 policy-pap | 	ssl.keystore.password = null
17:04:45 policy-pap | 	ssl.keystore.type = JKS
17:04:45 policy-pap | 	ssl.protocol = TLSv1.3
17:04:45 policy-pap | 	ssl.provider = null
17:04:45 policy-pap | 	ssl.secure.random.implementation = null
17:04:45 policy-pap | 	ssl.trustmanager.algorithm = PKIX
17:04:45 policy-pap | 	ssl.truststore.certificates = null
17:04:45 policy-pap | 	ssl.truststore.location = null
17:04:45 policy-pap | 	ssl.truststore.password = null
17:04:45 policy-pap | 	ssl.truststore.type = JKS
17:04:45 policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:04:45 policy-pap | 
17:04:45 policy-pap | [2024-09-29T17:02:39.987+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
17:04:45 policy-pap | [2024-09-29T17:02:39.988+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
17:04:45 policy-pap | [2024-09-29T17:02:39.989+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629359986
17:04:45 policy-pap | 
[2024-09-29T17:02:39.992+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-1, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Subscribed to topic(s): policy-pdp-pap
17:04:45 policy-pap | [2024-09-29T17:02:39.994+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 
17:04:45 policy-pap | 	allow.auto.create.topics = true
17:04:45 policy-pap | 	auto.commit.interval.ms = 5000
17:04:45 policy-pap | 	auto.include.jmx.reporter = true
17:04:45 policy-pap | 	auto.offset.reset = latest
17:04:45 policy-pap | 	bootstrap.servers = [kafka:9092]
17:04:45 policy-pap | 	check.crcs = true
17:04:45 policy-pap | 	client.dns.lookup = use_all_dns_ips
17:04:45 policy-pap | 	client.id = consumer-policy-pap-2
17:04:45 policy-pap | 	client.rack = 
17:04:45 policy-pap | 	connections.max.idle.ms = 540000
17:04:45 policy-pap | 	default.api.timeout.ms = 60000
17:04:45 policy-pap | 	enable.auto.commit = true
17:04:45 policy-pap | 	exclude.internal.topics = true
17:04:45 policy-pap | 	fetch.max.bytes = 52428800
17:04:45 policy-pap | 	fetch.max.wait.ms = 500
17:04:45 policy-pap | 	fetch.min.bytes = 1
17:04:45 policy-pap | 	group.id = policy-pap
17:04:45 policy-pap | 	group.instance.id = null
17:04:45 policy-pap | 	heartbeat.interval.ms = 3000
17:04:45 policy-pap | 	interceptor.classes = []
17:04:45 policy-pap | 	internal.leave.group.on.close = true
17:04:45 policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
17:04:45 policy-pap | 	isolation.level = read_uncommitted
17:04:45 policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:04:45 policy-pap | 	max.partition.fetch.bytes = 1048576
17:04:45 policy-pap | 	max.poll.interval.ms = 300000
17:04:45 policy-pap | 	max.poll.records = 500
17:04:45 policy-pap | 	metadata.max.age.ms = 300000
17:04:45 policy-pap | 	metric.reporters = []
17:04:45 policy-pap | 	metrics.num.samples = 2
17:04:45 policy-pap | 	metrics.recording.level = INFO
17:04:45 policy-pap | 	metrics.sample.window.ms = 
30000
17:04:45 policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
17:04:45 policy-pap | 	receive.buffer.bytes = 65536
17:04:45 policy-pap | 	reconnect.backoff.max.ms = 1000
17:04:45 policy-pap | 	reconnect.backoff.ms = 50
17:04:45 policy-pap | 	request.timeout.ms = 30000
17:04:45 policy-pap | 	retry.backoff.ms = 100
17:04:45 policy-pap | 	sasl.client.callback.handler.class = null
17:04:45 policy-pap | 	sasl.jaas.config = null
17:04:45 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:04:45 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
17:04:45 policy-pap | 	sasl.kerberos.service.name = null
17:04:45 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
17:04:45 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
17:04:45 policy-pap | 	sasl.login.callback.handler.class = null
17:04:45 policy-pap | 	sasl.login.class = null
17:04:45 policy-pap | 	sasl.login.connect.timeout.ms = null
17:04:45 policy-pap | 	sasl.login.read.timeout.ms = null
17:04:45 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
17:04:45 policy-pap | 	sasl.login.refresh.min.period.seconds = 60
17:04:45 policy-pap | 	sasl.login.refresh.window.factor = 0.8
17:04:45 policy-pap | 	sasl.login.refresh.window.jitter = 0.05
17:04:45 policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
17:04:45 policy-pap | 	sasl.login.retry.backoff.ms = 100
17:04:45 policy-pap | 	sasl.mechanism = GSSAPI
17:04:45 policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
17:04:45 policy-pap | 	sasl.oauthbearer.expected.audience = null
17:04:45 policy-pap | 	sasl.oauthbearer.expected.issuer = null
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:04:45 policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
17:04:45 
policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:45 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:45 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:45 policy-pap | security.protocol = PLAINTEXT 17:04:45 policy-pap | security.providers = null 17:04:45 policy-pap | send.buffer.bytes = 131072 17:04:45 policy-pap | session.timeout.ms = 45000 17:04:45 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:45 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:45 policy-pap | ssl.cipher.suites = null 17:04:45 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:45 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:45 policy-pap | ssl.engine.factory.class = null 17:04:45 policy-pap | ssl.key.password = null 17:04:45 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:45 policy-pap | ssl.keystore.certificate.chain = null 17:04:45 policy-pap | ssl.keystore.key = null 17:04:45 policy-pap | ssl.keystore.location = null 17:04:45 policy-pap | ssl.keystore.password = null 17:04:45 policy-pap | ssl.keystore.type = JKS 17:04:45 policy-pap | ssl.protocol = TLSv1.3 17:04:45 policy-pap | ssl.provider = null 17:04:45 policy-pap | ssl.secure.random.implementation = null 17:04:45 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:45 policy-pap | ssl.truststore.certificates = null 17:04:45 policy-pap | ssl.truststore.location = null 17:04:45 policy-pap | ssl.truststore.password = null 17:04:45 policy-pap | ssl.truststore.type = JKS 17:04:45 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:45 policy-pap | 17:04:45 policy-pap | [2024-09-29T17:02:40.009+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:45 policy-pap | [2024-09-29T17:02:40.009+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:45 policy-pap | [2024-09-29T17:02:40.009+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629360009 17:04:45 policy-pap | 
[2024-09-29T17:02:40.010+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:04:45 policy-pap | [2024-09-29T17:02:40.458+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 17:04:45 policy-pap | [2024-09-29T17:02:40.622+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 17:04:45 policy-pap | [2024-09-29T17:02:40.858+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@44da745f, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2435c6ae, org.springframework.security.web.context.SecurityContextHolderFilter@29dfc68f, org.springframework.security.web.header.HeaderWriterFilter@22172b00, org.springframework.security.web.authentication.logout.LogoutFilter@912747d, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4fd63c43, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@333a2df2, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@8c18bde, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@574f9e36, org.springframework.security.web.access.ExceptionTranslationFilter@24d0c6a4, 
org.springframework.security.web.access.intercept.AuthorizationFilter@6b630d4b] 17:04:45 policy-pap | [2024-09-29T17:02:41.740+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:04:45 policy-pap | [2024-09-29T17:02:41.863+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:04:45 policy-pap | [2024-09-29T17:02:41.883+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 17:04:45 policy-pap | [2024-09-29T17:02:41.902+00:00|INFO|ServiceManager|main] Policy PAP starting 17:04:45 policy-pap | [2024-09-29T17:02:41.902+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 17:04:45 policy-pap | [2024-09-29T17:02:41.903+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 17:04:45 policy-pap | [2024-09-29T17:02:41.903+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 17:04:45 policy-pap | [2024-09-29T17:02:41.904+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 17:04:45 policy-pap | [2024-09-29T17:02:41.904+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 17:04:45 policy-pap | [2024-09-29T17:02:41.904+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 17:04:45 policy-pap | [2024-09-29T17:02:41.906+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2cd2bf8c-cde2-4801-93ac-009d1b720a1d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@160e45c8 17:04:45 policy-pap | [2024-09-29T17:02:41.920+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2cd2bf8c-cde2-4801-93ac-009d1b720a1d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:45 policy-pap | [2024-09-29T17:02:41.921+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:45 policy-pap | allow.auto.create.topics = true 17:04:45 policy-pap | auto.commit.interval.ms = 5000 17:04:45 policy-pap | auto.include.jmx.reporter = true 17:04:45 policy-pap | auto.offset.reset = latest 17:04:45 policy-pap | bootstrap.servers = [kafka:9092] 17:04:45 policy-pap | check.crcs = true 17:04:45 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:45 policy-pap | client.id = consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3 17:04:45 policy-pap | client.rack = 17:04:45 policy-pap | connections.max.idle.ms = 540000 17:04:45 policy-pap | default.api.timeout.ms = 60000 17:04:45 policy-pap | enable.auto.commit = true 17:04:45 policy-pap | exclude.internal.topics = true 17:04:45 policy-pap | fetch.max.bytes = 52428800 17:04:45 policy-pap | fetch.max.wait.ms = 500 17:04:45 policy-pap | fetch.min.bytes = 1 17:04:45 policy-pap | group.id = 2cd2bf8c-cde2-4801-93ac-009d1b720a1d 17:04:45 policy-pap | group.instance.id = null 17:04:45 policy-pap | heartbeat.interval.ms = 3000 17:04:45 policy-pap | interceptor.classes = [] 17:04:45 policy-pap | internal.leave.group.on.close = true 17:04:45 policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false 17:04:45 policy-pap | isolation.level = read_uncommitted 17:04:45 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:45 policy-pap | max.partition.fetch.bytes = 1048576 17:04:45 policy-pap | max.poll.interval.ms = 300000 17:04:45 policy-pap | max.poll.records = 500 17:04:45 policy-pap | metadata.max.age.ms = 300000 17:04:45 policy-pap | metric.reporters = [] 17:04:45 policy-pap | metrics.num.samples = 2 17:04:45 policy-pap | metrics.recording.level = INFO 17:04:45 policy-pap | metrics.sample.window.ms = 30000 17:04:45 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:45 policy-pap | receive.buffer.bytes = 65536 17:04:45 policy-pap | reconnect.backoff.max.ms = 1000 17:04:45 policy-pap | reconnect.backoff.ms = 50 17:04:45 policy-pap | request.timeout.ms = 30000 17:04:45 policy-pap | retry.backoff.ms = 100 17:04:45 policy-pap | sasl.client.callback.handler.class = null 17:04:45 policy-pap | sasl.jaas.config = null 17:04:45 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:45 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:45 policy-pap | sasl.kerberos.service.name = null 17:04:45 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:45 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:45 policy-pap | sasl.login.callback.handler.class = null 17:04:45 policy-pap | sasl.login.class = null 17:04:45 policy-pap | sasl.login.connect.timeout.ms = null 17:04:45 policy-pap | sasl.login.read.timeout.ms = null 17:04:45 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:45 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:45 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:45 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:45 policy-pap | 
sasl.login.retry.backoff.max.ms = 10000 17:04:45 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:45 policy-pap | sasl.mechanism = GSSAPI 17:04:45 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:45 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:45 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:45 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:45 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:45 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:45 policy-pap | security.protocol = PLAINTEXT 17:04:45 policy-pap | security.providers = null 17:04:45 policy-pap | send.buffer.bytes = 131072 17:04:45 policy-pap | session.timeout.ms = 45000 17:04:45 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:45 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:45 policy-pap | ssl.cipher.suites = null 17:04:45 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:45 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:45 policy-pap | ssl.engine.factory.class = null 17:04:45 policy-pap | ssl.key.password = null 17:04:45 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:45 policy-pap | ssl.keystore.certificate.chain = null 17:04:45 policy-pap | ssl.keystore.key = null 17:04:45 policy-pap | ssl.keystore.location = null 17:04:45 policy-pap | ssl.keystore.password = null 17:04:45 policy-pap | ssl.keystore.type = JKS 17:04:45 policy-pap | ssl.protocol = TLSv1.3 17:04:45 policy-pap | ssl.provider = null 17:04:45 policy-pap | ssl.secure.random.implementation = null 17:04:45 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:45 policy-pap | ssl.truststore.certificates = 
null 17:04:45 policy-pap | ssl.truststore.location = null 17:04:45 policy-pap | ssl.truststore.password = null 17:04:45 policy-pap | ssl.truststore.type = JKS 17:04:45 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:45 policy-pap | 17:04:45 policy-pap | [2024-09-29T17:02:41.925+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:45 policy-pap | [2024-09-29T17:02:41.925+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:45 policy-pap | [2024-09-29T17:02:41.925+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629361925 17:04:45 policy-pap | [2024-09-29T17:02:41.925+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Subscribed to topic(s): policy-pdp-pap 17:04:45 policy-pap | [2024-09-29T17:02:41.925+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 17:04:45 policy-pap | [2024-09-29T17:02:41.926+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68a5f30d-402e-4bec-9c3d-83f6a398af54, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4a68cbc5 17:04:45 policy-pap | [2024-09-29T17:02:41.926+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68a5f30d-402e-4bec-9c3d-83f6a398af54, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:45 policy-pap | [2024-09-29T17:02:41.926+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:04:45 policy-pap | allow.auto.create.topics = true 17:04:45 policy-pap | auto.commit.interval.ms = 5000 17:04:45 policy-pap | auto.include.jmx.reporter = true 17:04:45 policy-pap | auto.offset.reset = latest 17:04:45 policy-pap | bootstrap.servers = [kafka:9092] 17:04:45 policy-pap | check.crcs = true 17:04:45 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:45 policy-pap | client.id = consumer-policy-pap-4 17:04:45 policy-pap | client.rack = 17:04:45 policy-pap | connections.max.idle.ms = 540000 17:04:45 policy-pap | default.api.timeout.ms = 60000 17:04:45 policy-pap | enable.auto.commit = true 17:04:45 policy-pap | exclude.internal.topics = true 17:04:45 policy-pap | fetch.max.bytes = 52428800 17:04:45 policy-pap | fetch.max.wait.ms = 500 17:04:45 policy-pap | fetch.min.bytes = 1 17:04:45 policy-pap | group.id = policy-pap 17:04:45 policy-pap | group.instance.id = null 17:04:45 policy-pap | heartbeat.interval.ms = 3000 17:04:45 policy-pap | interceptor.classes = [] 17:04:45 policy-pap | internal.leave.group.on.close = true 17:04:45 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:04:45 policy-pap | isolation.level = read_uncommitted 17:04:45 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:45 policy-pap | max.partition.fetch.bytes = 1048576 17:04:45 policy-pap | max.poll.interval.ms = 300000 17:04:45 policy-pap | max.poll.records = 500 17:04:45 policy-pap | metadata.max.age.ms = 
300000 17:04:45 policy-pap | metric.reporters = [] 17:04:45 policy-pap | metrics.num.samples = 2 17:04:45 policy-pap | metrics.recording.level = INFO 17:04:45 policy-pap | metrics.sample.window.ms = 30000 17:04:45 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:04:45 policy-pap | receive.buffer.bytes = 65536 17:04:45 policy-pap | reconnect.backoff.max.ms = 1000 17:04:45 policy-pap | reconnect.backoff.ms = 50 17:04:45 policy-pap | request.timeout.ms = 30000 17:04:45 policy-pap | retry.backoff.ms = 100 17:04:45 policy-pap | sasl.client.callback.handler.class = null 17:04:45 policy-pap | sasl.jaas.config = null 17:04:45 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:45 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:45 policy-pap | sasl.kerberos.service.name = null 17:04:45 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:45 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:45 policy-pap | sasl.login.callback.handler.class = null 17:04:45 policy-pap | sasl.login.class = null 17:04:45 policy-pap | sasl.login.connect.timeout.ms = null 17:04:45 policy-pap | sasl.login.read.timeout.ms = null 17:04:45 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:45 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:45 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:45 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:45 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:45 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:45 policy-pap | sasl.mechanism = GSSAPI 17:04:45 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:45 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:45 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:45 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:45 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:45 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:45 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:45 policy-pap | security.protocol = PLAINTEXT 17:04:45 policy-pap | security.providers = null 17:04:45 policy-pap | send.buffer.bytes = 131072 17:04:45 policy-pap | session.timeout.ms = 45000 17:04:45 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:04:45 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:45 policy-pap | ssl.cipher.suites = null 17:04:45 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:45 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:45 policy-pap | ssl.engine.factory.class = null 17:04:45 policy-pap | ssl.key.password = null 17:04:45 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:45 policy-pap | ssl.keystore.certificate.chain = null 17:04:45 policy-pap | ssl.keystore.key = null 17:04:45 policy-pap | ssl.keystore.location = null 17:04:45 policy-pap | ssl.keystore.password = null 17:04:45 policy-pap | ssl.keystore.type = JKS 17:04:45 policy-pap | ssl.protocol = TLSv1.3 17:04:45 policy-pap | ssl.provider = null 17:04:45 policy-pap | ssl.secure.random.implementation = null 17:04:45 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:45 policy-pap | ssl.truststore.certificates = null 17:04:45 policy-pap | ssl.truststore.location = null 17:04:45 policy-pap | ssl.truststore.password = null 17:04:45 policy-pap | ssl.truststore.type = JKS 17:04:45 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:04:45 policy-pap | 17:04:45 policy-pap | [2024-09-29T17:02:41.929+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:45 policy-pap | 
[2024-09-29T17:02:41.929+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:45 policy-pap | [2024-09-29T17:02:41.929+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629361929 17:04:45 policy-pap | [2024-09-29T17:02:41.930+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:04:45 policy-pap | [2024-09-29T17:02:41.930+00:00|INFO|ServiceManager|main] Policy PAP starting topics 17:04:45 policy-pap | [2024-09-29T17:02:41.930+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68a5f30d-402e-4bec-9c3d-83f6a398af54, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:45 policy-pap | [2024-09-29T17:02:41.930+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2cd2bf8c-cde2-4801-93ac-009d1b720a1d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:04:45 policy-pap | 
[2024-09-29T17:02:41.930+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cda18a6e-533d-40a7-9265-958ee0f2f300, alive=false, publisher=null]]: starting 17:04:45 policy-pap | [2024-09-29T17:02:41.950+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:45 policy-pap | acks = -1 17:04:45 policy-pap | auto.include.jmx.reporter = true 17:04:45 policy-pap | batch.size = 16384 17:04:45 policy-pap | bootstrap.servers = [kafka:9092] 17:04:45 policy-pap | buffer.memory = 33554432 17:04:45 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:45 policy-pap | client.id = producer-1 17:04:45 policy-pap | compression.type = none 17:04:45 policy-pap | connections.max.idle.ms = 540000 17:04:45 policy-pap | delivery.timeout.ms = 120000 17:04:45 policy-pap | enable.idempotence = true 17:04:45 policy-pap | interceptor.classes = [] 17:04:45 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:45 policy-pap | linger.ms = 0 17:04:45 policy-pap | max.block.ms = 60000 17:04:45 policy-pap | max.in.flight.requests.per.connection = 5 17:04:45 policy-pap | max.request.size = 1048576 17:04:45 policy-pap | metadata.max.age.ms = 300000 17:04:45 policy-pap | metadata.max.idle.ms = 300000 17:04:45 policy-pap | metric.reporters = [] 17:04:45 policy-pap | metrics.num.samples = 2 17:04:45 policy-pap | metrics.recording.level = INFO 17:04:45 policy-pap | metrics.sample.window.ms = 30000 17:04:45 policy-pap | partitioner.adaptive.partitioning.enable = true 17:04:45 policy-pap | partitioner.availability.timeout.ms = 0 17:04:45 policy-pap | partitioner.class = null 17:04:45 policy-pap | partitioner.ignore.keys = false 17:04:45 policy-pap | receive.buffer.bytes = 32768 17:04:45 policy-pap | reconnect.backoff.max.ms = 1000 17:04:45 policy-pap | reconnect.backoff.ms = 50 17:04:45 policy-pap | request.timeout.ms = 30000 17:04:45 policy-pap | retries = 2147483647 
17:04:45 policy-pap | retry.backoff.ms = 100 17:04:45 policy-pap | sasl.client.callback.handler.class = null 17:04:45 policy-pap | sasl.jaas.config = null 17:04:45 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:04:45 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:04:45 policy-pap | sasl.kerberos.service.name = null 17:04:45 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:04:45 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:04:45 policy-pap | sasl.login.callback.handler.class = null 17:04:45 policy-pap | sasl.login.class = null 17:04:45 policy-pap | sasl.login.connect.timeout.ms = null 17:04:45 policy-pap | sasl.login.read.timeout.ms = null 17:04:45 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:04:45 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:04:45 policy-pap | sasl.login.refresh.window.factor = 0.8 17:04:45 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:04:45 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:04:45 policy-pap | sasl.login.retry.backoff.ms = 100 17:04:45 policy-pap | sasl.mechanism = GSSAPI 17:04:45 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:04:45 policy-pap | sasl.oauthbearer.expected.audience = null 17:04:45 policy-pap | sasl.oauthbearer.expected.issuer = null 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:04:45 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:04:45 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:04:45 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:04:45 policy-pap | security.protocol = PLAINTEXT 17:04:45 policy-pap | security.providers = null 17:04:45 policy-pap | send.buffer.bytes = 131072 17:04:45 policy-pap | socket.connection.setup.timeout.max.ms = 
30000 17:04:45 policy-pap | socket.connection.setup.timeout.ms = 10000 17:04:45 policy-pap | ssl.cipher.suites = null 17:04:45 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:04:45 policy-pap | ssl.endpoint.identification.algorithm = https 17:04:45 policy-pap | ssl.engine.factory.class = null 17:04:45 policy-pap | ssl.key.password = null 17:04:45 policy-pap | ssl.keymanager.algorithm = SunX509 17:04:45 policy-pap | ssl.keystore.certificate.chain = null 17:04:45 policy-pap | ssl.keystore.key = null 17:04:45 policy-pap | ssl.keystore.location = null 17:04:45 policy-pap | ssl.keystore.password = null 17:04:45 policy-pap | ssl.keystore.type = JKS 17:04:45 policy-pap | ssl.protocol = TLSv1.3 17:04:45 policy-pap | ssl.provider = null 17:04:45 policy-pap | ssl.secure.random.implementation = null 17:04:45 policy-pap | ssl.trustmanager.algorithm = PKIX 17:04:45 policy-pap | ssl.truststore.certificates = null 17:04:45 policy-pap | ssl.truststore.location = null 17:04:45 policy-pap | ssl.truststore.password = null 17:04:45 policy-pap | ssl.truststore.type = JKS 17:04:45 policy-pap | transaction.timeout.ms = 60000 17:04:45 policy-pap | transactional.id = null 17:04:45 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:45 policy-pap | 17:04:45 policy-pap | [2024-09-29T17:02:41.966+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
17:04:45 policy-pap | [2024-09-29T17:02:41.985+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:04:45 policy-pap | [2024-09-29T17:02:41.985+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:04:45 policy-pap | [2024-09-29T17:02:41.985+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629361985 17:04:45 policy-pap | [2024-09-29T17:02:41.986+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cda18a6e-533d-40a7-9265-958ee0f2f300, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:04:45 policy-pap | [2024-09-29T17:02:41.986+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bc8a1032-396b-48e5-b283-d04f8a133558, alive=false, publisher=null]]: starting 17:04:45 policy-pap | [2024-09-29T17:02:41.996+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:04:45 policy-pap | acks = -1 17:04:45 policy-pap | auto.include.jmx.reporter = true 17:04:45 policy-pap | batch.size = 16384 17:04:45 policy-pap | bootstrap.servers = [kafka:9092] 17:04:45 policy-pap | buffer.memory = 33554432 17:04:45 policy-pap | client.dns.lookup = use_all_dns_ips 17:04:45 policy-pap | client.id = producer-2 17:04:45 policy-pap | compression.type = none 17:04:45 policy-pap | connections.max.idle.ms = 540000 17:04:45 policy-pap | delivery.timeout.ms = 120000 17:04:45 policy-pap | enable.idempotence = true 17:04:45 policy-pap | interceptor.classes = [] 17:04:45 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:04:45 policy-pap | linger.ms = 0 17:04:45 policy-pap | max.block.ms = 60000 17:04:45 policy-pap | max.in.flight.requests.per.connection = 5 17:04:45 policy-pap | max.request.size = 1048576 17:04:45 policy-pap | metadata.max.age.ms = 300000 17:04:45 policy-pap | metadata.max.idle.ms = 300000 17:04:45 policy-pap | metric.reporters = [] 
17:04:45 policy-pap | metrics.num.samples = 2
17:04:45 policy-pap | metrics.recording.level = INFO
17:04:45 policy-pap | metrics.sample.window.ms = 30000
17:04:45 policy-pap | partitioner.adaptive.partitioning.enable = true
17:04:45 policy-pap | partitioner.availability.timeout.ms = 0
17:04:45 policy-pap | partitioner.class = null
17:04:45 policy-pap | partitioner.ignore.keys = false
17:04:45 policy-pap | receive.buffer.bytes = 32768
17:04:45 policy-pap | reconnect.backoff.max.ms = 1000
17:04:45 policy-pap | reconnect.backoff.ms = 50
17:04:45 policy-pap | request.timeout.ms = 30000
17:04:45 policy-pap | retries = 2147483647
17:04:45 policy-pap | retry.backoff.ms = 100
17:04:45 policy-pap | sasl.client.callback.handler.class = null
17:04:45 policy-pap | sasl.jaas.config = null
17:04:45 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:04:45 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
17:04:45 policy-pap | sasl.kerberos.service.name = null
17:04:45 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
17:04:45 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
17:04:45 policy-pap | sasl.login.callback.handler.class = null
17:04:45 policy-pap | sasl.login.class = null
17:04:45 policy-pap | sasl.login.connect.timeout.ms = null
17:04:45 policy-pap | sasl.login.read.timeout.ms = null
17:04:45 policy-pap | sasl.login.refresh.buffer.seconds = 300
17:04:45 policy-pap | sasl.login.refresh.min.period.seconds = 60
17:04:45 policy-pap | sasl.login.refresh.window.factor = 0.8
17:04:45 policy-pap | sasl.login.refresh.window.jitter = 0.05
17:04:45 policy-pap | sasl.login.retry.backoff.max.ms = 10000
17:04:45 policy-pap | sasl.login.retry.backoff.ms = 100
17:04:45 policy-pap | sasl.mechanism = GSSAPI
17:04:45 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
17:04:45 policy-pap | sasl.oauthbearer.expected.audience = null
17:04:45 policy-pap | sasl.oauthbearer.expected.issuer = null
17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:04:45 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
17:04:45 policy-pap | sasl.oauthbearer.scope.claim.name = scope
17:04:45 policy-pap | sasl.oauthbearer.sub.claim.name = sub
17:04:45 policy-pap | sasl.oauthbearer.token.endpoint.url = null
17:04:45 policy-pap | security.protocol = PLAINTEXT
17:04:45 policy-pap | security.providers = null
17:04:45 policy-pap | send.buffer.bytes = 131072
17:04:45 policy-pap | socket.connection.setup.timeout.max.ms = 30000
17:04:45 policy-pap | socket.connection.setup.timeout.ms = 10000
17:04:45 policy-pap | ssl.cipher.suites = null
17:04:45 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:04:45 policy-pap | ssl.endpoint.identification.algorithm = https
17:04:45 policy-pap | ssl.engine.factory.class = null
17:04:45 policy-pap | ssl.key.password = null
17:04:45 policy-pap | ssl.keymanager.algorithm = SunX509
17:04:45 policy-pap | ssl.keystore.certificate.chain = null
17:04:45 policy-pap | ssl.keystore.key = null
17:04:45 policy-pap | ssl.keystore.location = null
17:04:45 policy-pap | ssl.keystore.password = null
17:04:45 policy-pap | ssl.keystore.type = JKS
17:04:45 policy-pap | ssl.protocol = TLSv1.3
17:04:45 policy-pap | ssl.provider = null
17:04:45 policy-pap | ssl.secure.random.implementation = null
17:04:45 policy-pap | ssl.trustmanager.algorithm = PKIX
17:04:45 policy-pap | ssl.truststore.certificates = null
17:04:45 policy-pap | ssl.truststore.location = null
17:04:45 policy-pap | ssl.truststore.password = null
17:04:45 policy-pap | ssl.truststore.type = JKS
17:04:45 policy-pap | transaction.timeout.ms = 60000
17:04:45 policy-pap | transactional.id = null
17:04:45 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
17:04:45 policy-pap | 
17:04:45 policy-pap | [2024-09-29T17:02:41.997+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
17:04:45 policy-pap | [2024-09-29T17:02:42.001+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
17:04:45 policy-pap | [2024-09-29T17:02:42.001+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
17:04:45 policy-pap | [2024-09-29T17:02:42.001+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1727629362001
17:04:45 policy-pap | [2024-09-29T17:02:42.002+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bc8a1032-396b-48e5-b283-d04f8a133558, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
17:04:45 policy-pap | [2024-09-29T17:02:42.002+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
17:04:45 policy-pap | [2024-09-29T17:02:42.002+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
17:04:45 policy-pap | [2024-09-29T17:02:42.005+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
17:04:45 policy-pap | [2024-09-29T17:02:42.006+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
17:04:45 policy-pap | [2024-09-29T17:02:42.008+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
17:04:45 policy-pap | [2024-09-29T17:02:42.009+00:00|INFO|TimerManager|Thread-9] timer manager update started
17:04:45 policy-pap | [2024-09-29T17:02:42.009+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
17:04:45 policy-pap | [2024-09-29T17:02:42.009+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
17:04:45 policy-pap | [2024-09-29T17:02:42.010+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
17:04:45 policy-pap | [2024-09-29T17:02:42.011+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
17:04:45 policy-pap | [2024-09-29T17:02:42.014+00:00|INFO|ServiceManager|main] Policy PAP started
17:04:45 policy-pap | [2024-09-29T17:02:42.016+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.55 seconds (process running for 11.177)
17:04:45 policy-pap | [2024-09-29T17:02:42.421+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: o98Zehj3SPmhzfdRi49uhg
17:04:45 policy-pap | [2024-09-29T17:02:42.422+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
17:04:45 policy-pap | [2024-09-29T17:02:42.422+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Cluster ID: o98Zehj3SPmhzfdRi49uhg
17:04:45 policy-pap | [2024-09-29T17:02:42.425+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: o98Zehj3SPmhzfdRi49uhg
17:04:45 policy-pap | [2024-09-29T17:02:42.471+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.471+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: o98Zehj3SPmhzfdRi49uhg
17:04:45 policy-pap | [2024-09-29T17:02:42.536+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
17:04:45 policy-pap | [2024-09-29T17:02:42.537+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
17:04:45 policy-pap | [2024-09-29T17:02:42.557+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.596+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.669+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.706+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.778+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.812+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.885+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.922+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:42.996+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.037+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.119+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.144+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.231+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.260+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.336+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.363+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.443+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
17:04:45 policy-pap | [2024-09-29T17:02:43.478+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
17:04:45 policy-pap | [2024-09-29T17:02:43.490+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
17:04:45 policy-pap | [2024-09-29T17:02:43.545+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-d6c043eb-ab29-40d1-9783-0bdadc82bc9d
17:04:45 policy-pap | [2024-09-29T17:02:43.545+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
17:04:45 policy-pap | [2024-09-29T17:02:43.545+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
17:04:45 policy-pap | [2024-09-29T17:02:43.558+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
17:04:45 policy-pap | [2024-09-29T17:02:43.561+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] (Re-)joining group
17:04:45 policy-pap | [2024-09-29T17:02:43.571+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Request joining group due to: need to re-join with the given member-id: consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3-3ceef4f0-12a3-4015-a16f-5a374933d47e
17:04:45 policy-pap | [2024-09-29T17:02:43.571+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
17:04:45 policy-pap | [2024-09-29T17:02:43.571+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] (Re-)joining group
17:04:45 policy-pap | [2024-09-29T17:02:46.572+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-d6c043eb-ab29-40d1-9783-0bdadc82bc9d', protocol='range'}
17:04:45 policy-pap | [2024-09-29T17:02:46.575+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Successfully joined group with generation Generation{generationId=1, memberId='consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3-3ceef4f0-12a3-4015-a16f-5a374933d47e', protocol='range'}
17:04:45 policy-pap | [2024-09-29T17:02:46.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-d6c043eb-ab29-40d1-9783-0bdadc82bc9d=Assignment(partitions=[policy-pdp-pap-0])}
17:04:45 policy-pap | [2024-09-29T17:02:46.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Finished assignment for group at generation 1: {consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3-3ceef4f0-12a3-4015-a16f-5a374933d47e=Assignment(partitions=[policy-pdp-pap-0])}
17:04:45 policy-pap | [2024-09-29T17:02:46.605+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-d6c043eb-ab29-40d1-9783-0bdadc82bc9d', protocol='range'}
17:04:45 policy-pap | [2024-09-29T17:02:46.605+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Successfully synced group in generation Generation{generationId=1, memberId='consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3-3ceef4f0-12a3-4015-a16f-5a374933d47e', protocol='range'}
17:04:45 policy-pap | [2024-09-29T17:02:46.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
17:04:45 policy-pap | [2024-09-29T17:02:46.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
17:04:45 policy-pap | [2024-09-29T17:02:46.611+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Adding newly assigned partitions: policy-pdp-pap-0
17:04:45 policy-pap | [2024-09-29T17:02:46.611+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
17:04:45 policy-pap | [2024-09-29T17:02:46.630+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
17:04:45 policy-pap | [2024-09-29T17:02:46.633+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Found no committed offset for partition policy-pdp-pap-0
17:04:45 policy-pap | [2024-09-29T17:02:46.653+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
17:04:45 policy-pap | [2024-09-29T17:02:46.654+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2cd2bf8c-cde2-4801-93ac-009d1b720a1d-3, groupId=2cd2bf8c-cde2-4801-93ac-009d1b720a1d] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
17:04:45 policy-pap | [2024-09-29T17:03:04.051+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
17:04:45 policy-pap | []
17:04:45 policy-pap | [2024-09-29T17:03:04.052+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d9828564-9921-47fa-a0d3-ffea5784e360","timestampMs":1727629383993,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"}
17:04:45 policy-pap | [2024-09-29T17:03:04.052+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d9828564-9921-47fa-a0d3-ffea5784e360","timestampMs":1727629383993,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"}
17:04:45 policy-pap | [2024-09-29T17:03:04.059+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
17:04:45 policy-pap | [2024-09-29T17:03:04.148+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting
17:04:45 policy-pap | [2024-09-29T17:03:04.148+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting listener
17:04:45 policy-pap | [2024-09-29T17:03:04.149+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting timer
17:04:45 policy-pap | [2024-09-29T17:03:04.149+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=b3e44c7c-f3cb-4a8d-826d-53df0b91c8af, expireMs=1727629414149]
17:04:45 policy-pap | [2024-09-29T17:03:04.151+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting enqueue
17:04:45 policy-pap | [2024-09-29T17:03:04.151+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=b3e44c7c-f3cb-4a8d-826d-53df0b91c8af, expireMs=1727629414149]
17:04:45 policy-pap | [2024-09-29T17:03:04.152+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate started
17:04:45 policy-pap | [2024-09-29T17:03:04.157+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","timestampMs":1727629384127,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.190+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","timestampMs":1727629384127,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.191+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
17:04:45 policy-pap | [2024-09-29T17:03:04.193+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","timestampMs":1727629384127,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.193+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
17:04:45 policy-pap | [2024-09-29T17:03:04.221+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eb9f646b-eaeb-4593-a057-2be0b938fd17","timestampMs":1727629384201,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"}
17:04:45 policy-pap | [2024-09-29T17:03:04.221+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eb9f646b-eaeb-4593-a057-2be0b938fd17","timestampMs":1727629384201,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup"}
17:04:45 policy-pap | [2024-09-29T17:03:04.222+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
17:04:45 policy-pap | [2024-09-29T17:03:04.222+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"78c9ebdf-5657-4c85-876f-d0b4a6324cf3","timestampMs":1727629384204,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.240+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"b3e44c7c-f3cb-4a8d-826d-53df0b91c8af","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"78c9ebdf-5657-4c85-876f-d0b4a6324cf3","timestampMs":1727629384204,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.240+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping
17:04:45 policy-pap | [2024-09-29T17:03:04.241+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id b3e44c7c-f3cb-4a8d-826d-53df0b91c8af
17:04:45 policy-pap | [2024-09-29T17:03:04.241+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping enqueue
17:04:45 policy-pap | [2024-09-29T17:03:04.241+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping timer
17:04:45 policy-pap | [2024-09-29T17:03:04.241+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=b3e44c7c-f3cb-4a8d-826d-53df0b91c8af, expireMs=1727629414149]
17:04:45 policy-pap | [2024-09-29T17:03:04.242+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping listener
17:04:45 policy-pap | [2024-09-29T17:03:04.242+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopped
17:04:45 policy-pap | [2024-09-29T17:03:04.247+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate successful
17:04:45 policy-pap | [2024-09-29T17:03:04.247+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 start publishing next request
17:04:45 policy-pap | [2024-09-29T17:03:04.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange starting
17:04:45 policy-pap | [2024-09-29T17:03:04.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange starting listener
17:04:45 policy-pap | [2024-09-29T17:03:04.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange starting timer
17:04:45 policy-pap | [2024-09-29T17:03:04.248+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=2215677e-d7a0-49a3-b961-0181a0078062, expireMs=1727629414248]
17:04:45 policy-pap | [2024-09-29T17:03:04.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange starting enqueue
17:04:45 policy-pap | [2024-09-29T17:03:04.249+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=2215677e-d7a0-49a3-b961-0181a0078062, expireMs=1727629414248]
17:04:45 policy-pap | [2024-09-29T17:03:04.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange started
17:04:45 policy-pap | [2024-09-29T17:03:04.250+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"2215677e-d7a0-49a3-b961-0181a0078062","timestampMs":1727629384128,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.286+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"2215677e-d7a0-49a3-b961-0181a0078062","timestampMs":1727629384128,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.287+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
17:04:45 policy-pap | [2024-09-29T17:03:04.297+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"2215677e-d7a0-49a3-b961-0181a0078062","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"291487ea-7cae-4e80-b23a-3c9bb68e4950","timestampMs":1727629384285,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.298+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 2215677e-d7a0-49a3-b961-0181a0078062
17:04:45 policy-pap | [2024-09-29T17:03:04.350+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"2215677e-d7a0-49a3-b961-0181a0078062","timestampMs":1727629384128,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.351+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
17:04:45 policy-pap | [2024-09-29T17:03:04.354+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"2215677e-d7a0-49a3-b961-0181a0078062","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"291487ea-7cae-4e80-b23a-3c9bb68e4950","timestampMs":1727629384285,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange stopping
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange stopping enqueue
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange stopping timer
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=2215677e-d7a0-49a3-b961-0181a0078062, expireMs=1727629414248]
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange stopping listener
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange stopped
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpStateChange successful
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 start publishing next request
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting listener
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting timer
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=5899912b-3d11-43b6-a357-a6d8cc275175, expireMs=1727629414355]
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate starting enqueue
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate started
17:04:45 policy-pap | [2024-09-29T17:03:04.355+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5899912b-3d11-43b6-a357-a6d8cc275175","timestampMs":1727629384339,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.374+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5899912b-3d11-43b6-a357-a6d8cc275175","timestampMs":1727629384339,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.374+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
17:04:45 policy-pap | [2024-09-29T17:03:04.381+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
17:04:45 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5899912b-3d11-43b6-a357-a6d8cc275175","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"86c2bf53-fb39-491c-a8d8-b8bc859fd94e","timestampMs":1727629384366,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.381+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"source":"pap-57c8d64d-d2db-46e1-966f-5ce1bae398d1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5899912b-3d11-43b6-a357-a6d8cc275175","timestampMs":1727629384339,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.381+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
17:04:45 policy-pap | [2024-09-29T17:03:04.381+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 5899912b-3d11-43b6-a357-a6d8cc275175
17:04:45 policy-pap | [2024-09-29T17:03:04.384+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:04:45 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5899912b-3d11-43b6-a357-a6d8cc275175","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"86c2bf53-fb39-491c-a8d8-b8bc859fd94e","timestampMs":1727629384366,"name":"apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:04:45 policy-pap | [2024-09-29T17:03:04.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping
17:04:45 policy-pap | [2024-09-29T17:03:04.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping enqueue
17:04:45 policy-pap | [2024-09-29T17:03:04.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping timer
17:04:45 policy-pap | [2024-09-29T17:03:04.384+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=5899912b-3d11-43b6-a357-a6d8cc275175, expireMs=1727629414355]
17:04:45 policy-pap | [2024-09-29T17:03:04.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopping listener
17:04:45 policy-pap | [2024-09-29T17:03:04.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate stopped
17:04:45 policy-pap | [2024-09-29T17:03:04.388+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 PdpUpdate successful
17:04:45 policy-pap | [2024-09-29T17:03:04.388+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-0d8721c1-2bf1-41c7-8d22-ac8085ad31b9 has no more requests
17:04:45 policy-pap | [2024-09-29T17:03:15.292+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
17:04:45 policy-pap | [2024-09-29T17:03:15.292+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
17:04:45 policy-pap | [2024-09-29T17:03:15.294+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms
17:04:45 policy-pap | [2024-09-29T17:03:34.150+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=b3e44c7c-f3cb-4a8d-826d-53df0b91c8af, expireMs=1727629414149]
17:04:45 policy-pap | [2024-09-29T17:03:34.248+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=2215677e-d7a0-49a3-b961-0181a0078062, expireMs=1727629414248]
17:04:45 policy-pap | [2024-09-29T17:03:35.728+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
17:04:45 policy-pap | [2024-09-29T17:03:35.778+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
17:04:45 policy-pap | [2024-09-29T17:03:35.788+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
17:04:45 policy-pap | [2024-09-29T17:03:35.789+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
17:04:45 policy-pap | [2024-09-29T17:03:36.209+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
17:04:45 policy-pap | [2024-09-29T17:03:36.747+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
17:04:45 policy-pap | [2024-09-29T17:03:36.748+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
17:04:45 policy-pap | [2024-09-29T17:03:37.312+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
17:04:45 policy-pap | [2024-09-29T17:03:37.493+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
17:04:45 policy-pap | [2024-09-29T17:03:37.597+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
17:04:45 policy-pap | [2024-09-29T17:03:37.597+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
17:04:45 policy-pap | [2024-09-29T17:03:37.598+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
17:04:45 policy-pap | [2024-09-29T17:03:37.611+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-09-29T17:03:37Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC,
policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-09-29T17:03:37Z, user=policyadmin)] 17:04:45 policy-pap | [2024-09-29T17:03:38.263+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 17:04:45 policy-pap | [2024-09-29T17:03:38.264+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 17:04:45 policy-pap | [2024-09-29T17:03:38.264+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 17:04:45 policy-pap | [2024-09-29T17:03:38.264+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 17:04:45 policy-pap | [2024-09-29T17:03:38.265+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 17:04:45 policy-pap | [2024-09-29T17:03:38.277+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-09-29T17:03:38Z, user=policyadmin)] 17:04:45 policy-pap | [2024-09-29T17:03:38.636+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group defaultGroup 17:04:45 policy-pap | [2024-09-29T17:03:38.636+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup 17:04:45 policy-pap | [2024-09-29T17:03:38.636+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 17:04:45 policy-pap | [2024-09-29T17:03:38.636+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 17:04:45 policy-pap | [2024-09-29T17:03:38.636+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup 17:04:45 policy-pap | [2024-09-29T17:03:38.637+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup 17:04:45 policy-pap | 
[2024-09-29T17:03:38.768+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-09-29T17:03:38Z, user=policyadmin)] 17:04:45 policy-pap | [2024-09-29T17:03:39.330+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 17:04:45 policy-pap | [2024-09-29T17:03:39.332+00:00|INFO|SessionData|http-nio-6969-exec-10] deleting DB group testGroup 17:04:45 policy-pap | [2024-09-29T17:04:42.011+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 17:04:45 =================================== 17:04:45 ======== Logs from prometheus ======== 17:04:45 prometheus | ts=2024-09-29T17:01:59.173Z caller=main.go:601 level=info msg="No time or size retention was set so using the default time retention" duration=15d 17:04:45 prometheus | ts=2024-09-29T17:01:59.173Z caller=main.go:645 level=info msg="Starting Prometheus Server" mode=server version="(version=2.54.1, branch=HEAD, revision=e6cfa720fbe6280153fab13090a483dbd40bece3)" 17:04:45 prometheus | ts=2024-09-29T17:01:59.173Z caller=main.go:650 level=info build_context="(go=go1.22.6, platform=linux/amd64, user=root@812ffd741951, date=20240827-10:56:41, tags=netgo,builtinassets,stringlabels)" 17:04:45 prometheus | ts=2024-09-29T17:01:59.173Z caller=main.go:651 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 17:04:45 prometheus | ts=2024-09-29T17:01:59.173Z caller=main.go:652 level=info fd_limits="(soft=1048576, hard=1048576)" 17:04:45 prometheus | ts=2024-09-29T17:01:59.173Z caller=main.go:653 level=info vm_limits="(soft=unlimited, hard=unlimited)" 17:04:45 prometheus | ts=2024-09-29T17:01:59.179Z caller=web.go:571 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 17:04:45 prometheus | 
ts=2024-09-29T17:01:59.180Z caller=main.go:1160 level=info msg="Starting TSDB ..." 17:04:45 prometheus | ts=2024-09-29T17:01:59.183Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 17:04:45 prometheus | ts=2024-09-29T17:01:59.183Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 17:04:45 prometheus | ts=2024-09-29T17:01:59.189Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 17:04:45 prometheus | ts=2024-09-29T17:01:59.189Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.72µs 17:04:45 prometheus | ts=2024-09-29T17:01:59.189Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while" 17:04:45 prometheus | ts=2024-09-29T17:01:59.190Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 17:04:45 prometheus | ts=2024-09-29T17:01:59.190Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=35.7µs wal_replay_duration=796.886µs wbl_replay_duration=301ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.72µs total_replay_duration=863.397µs 17:04:45 prometheus | ts=2024-09-29T17:01:59.196Z caller=main.go:1181 level=info fs_type=EXT4_SUPER_MAGIC 17:04:45 prometheus | ts=2024-09-29T17:01:59.196Z caller=main.go:1184 level=info msg="TSDB started" 17:04:45 prometheus | ts=2024-09-29T17:01:59.196Z caller=main.go:1367 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 17:04:45 prometheus | ts=2024-09-29T17:01:59.197Z caller=main.go:1404 level=info msg="updated GOGC" old=100 new=75 17:04:45 prometheus | ts=2024-09-29T17:01:59.197Z caller=main.go:1415 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.050778ms db_storage=1.44µs remote_storage=3.28µs 
web_handler=650ns query_engine=5.49µs scrape=255.172µs scrape_sd=143.132µs notify=33.17µs notify_sd=12.35µs rules=2.25µs tracing=8.68µs 17:04:45 prometheus | ts=2024-09-29T17:01:59.197Z caller=main.go:1145 level=info msg="Server is ready to receive web requests." 17:04:45 prometheus | ts=2024-09-29T17:01:59.197Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..." 17:04:45 =================================== 17:04:45 ======== Logs from simulator ======== 17:04:45 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 17:04:45 simulator | overriding logback.xml 17:04:45 simulator | 2024-09-29 17:02:03,656 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 17:04:45 simulator | 2024-09-29 17:02:03,717 INFO org.onap.policy.models.simulators starting 17:04:45 simulator | 2024-09-29 17:02:03,718 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 17:04:45 simulator | 2024-09-29 17:02:03,947 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 17:04:45 simulator | 2024-09-29 17:02:03,950 INFO org.onap.policy.models.simulators starting A&AI simulator 17:04:45 simulator | 2024-09-29 17:02:04,105 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:45 simulator | 2024-09-29 17:02:04,119 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:04,123 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:04,128 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 
922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:04:45 simulator | 2024-09-29 17:02:04,224 INFO Session workerName=node0 17:04:45 simulator | 2024-09-29 17:02:04,999 INFO Using GSON for REST calls 17:04:45 simulator | 2024-09-29 17:02:05,148 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 17:04:45 simulator | 2024-09-29 17:02:05,170 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 17:04:45 simulator | 2024-09-29 17:02:05,181 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @2003ms 17:04:45 simulator | 2024-09-29 17:02:05,181 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 3942 ms. 
17:04:45 simulator | 2024-09-29 17:02:05,190 INFO org.onap.policy.models.simulators starting SDNC simulator 17:04:45 simulator | 2024-09-29 17:02:05,202 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:45 simulator | 2024-09-29 17:02:05,205 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:05,210 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:05,211 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:04:45 simulator | 2024-09-29 17:02:05,225 INFO Session workerName=node0 17:04:45 simulator | 2024-09-29 17:02:05,332 INFO Using GSON for REST calls 17:04:45 simulator | 2024-09-29 17:02:05,344 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 17:04:45 simulator | 2024-09-29 17:02:05,346 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 17:04:45 simulator | 2024-09-29 17:02:05,346 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @2168ms 17:04:45 simulator | 2024-09-29 17:02:05,346 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC 
simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4864 ms. 17:04:45 simulator | 2024-09-29 17:02:05,348 INFO org.onap.policy.models.simulators starting SO simulator 17:04:45 simulator | 2024-09-29 17:02:05,351 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:45 simulator | 2024-09-29 17:02:05,351 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:05,352 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:05,353 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:04:45 simulator | 2024-09-29 17:02:05,357 INFO Session workerName=node0 17:04:45 simulator | 2024-09-29 17:02:05,458 INFO Using GSON for REST calls 17:04:45 simulator | 2024-09-29 17:02:05,471 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 17:04:45 simulator | 2024-09-29 17:02:05,473 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 17:04:45 simulator | 2024-09-29 17:02:05,473 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @2296ms 17:04:45 simulator | 2024-09-29 17:02:05,474 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4877 ms. 17:04:45 simulator | 2024-09-29 17:02:05,475 INFO org.onap.policy.models.simulators starting VFC simulator 17:04:45 simulator | 2024-09-29 17:02:05,479 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:04:45 simulator | 2024-09-29 17:02:05,479 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:05,481 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:04:45 simulator | 2024-09-29 17:02:05,481 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:04:45 simulator | 2024-09-29 17:02:05,483 INFO Session workerName=node0 17:04:45 simulator | 2024-09-29 17:02:05,525 INFO Using GSON for REST calls 17:04:45 simulator | 2024-09-29 17:02:05,534 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 17:04:45 simulator | 2024-09-29 17:02:05,535 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 17:04:45 simulator | 2024-09-29 17:02:05,535 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2357ms 17:04:45 simulator | 2024-09-29 17:02:05,535 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4946 ms. 17:04:45 simulator | 2024-09-29 17:02:05,536 INFO org.onap.policy.models.simulators started 17:04:45 =================================== 17:04:45 ======== Logs from zookeeper ======== 17:04:45 zookeeper | ===> User 17:04:45 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 17:04:45 zookeeper | ===> Configuring ... 17:04:45 zookeeper | ===> Running preflight checks ... 17:04:45 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 17:04:45 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 17:04:45 zookeeper | ===> Launching ... 17:04:45 zookeeper | ===> Launching zookeeper ... 
17:04:45 zookeeper | [2024-09-29 17:02:05,047] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,049] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,049] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,049] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,049] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,051] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 17:04:45 zookeeper | [2024-09-29 17:02:05,051] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 17:04:45 zookeeper | [2024-09-29 17:02:05,051] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 17:04:45 zookeeper | [2024-09-29 17:02:05,051] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 17:04:45 zookeeper | [2024-09-29 17:02:05,052] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 17:04:45 zookeeper | [2024-09-29 17:02:05,052] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,052] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,052] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,052] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,052] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:04:45 zookeeper | [2024-09-29 17:02:05,053] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 17:04:45 zookeeper | [2024-09-29 17:02:05,063] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@75c072cb (org.apache.zookeeper.server.ServerMetrics) 17:04:45 zookeeper | [2024-09-29 17:02:05,066] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:04:45 zookeeper | [2024-09-29 17:02:05,066] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:04:45 zookeeper | [2024-09-29 17:02:05,068] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:04:45 zookeeper | [2024-09-29 17:02:05,076] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,076] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 
(org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,077] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:java.version=17.0.12 (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/connect-json-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.
0.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:
/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bi
n/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/trogdor-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.1-ccs.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 17:04:45 zookeeper | [2024-09-29 17:02:05,078] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,079] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,080] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
17:04:45 zookeeper | [2024-09-29 17:02:05,081] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,081] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,088] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
17:04:45 zookeeper | [2024-09-29 17:02:05,088] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
17:04:45 zookeeper | [2024-09-29 17:02:05,088] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
17:04:45 zookeeper | [2024-09-29 17:02:05,089] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
17:04:45 zookeeper | [2024-09-29 17:02:05,089] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
17:04:45 zookeeper | [2024-09-29 17:02:05,089] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
17:04:45 zookeeper | [2024-09-29 17:02:05,089] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
17:04:45 zookeeper | [2024-09-29 17:02:05,089] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
17:04:45 zookeeper | [2024-09-29 17:02:05,091] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,091] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,091] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
17:04:45 zookeeper | [2024-09-29 17:02:05,091] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
17:04:45 zookeeper | [2024-09-29 17:02:05,091] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,134] INFO Logging initialized @542ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
17:04:45 zookeeper | [2024-09-29 17:02:05,229] WARN o.e.j.s.ServletContextHandler@f5958c9{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
17:04:45 zookeeper | [2024-09-29 17:02:05,229] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
17:04:45 zookeeper | [2024-09-29 17:02:05,247] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 17.0.12+7-LTS (org.eclipse.jetty.server.Server)
17:04:45 zookeeper | [2024-09-29 17:02:05,305] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
17:04:45 zookeeper | [2024-09-29 17:02:05,305] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
17:04:45 zookeeper | [2024-09-29 17:02:05,308] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
17:04:45 zookeeper | [2024-09-29 17:02:05,323] WARN ServletContext@o.e.j.s.ServletContextHandler@f5958c9{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
17:04:45 zookeeper | [2024-09-29 17:02:05,344] INFO Started o.e.j.s.ServletContextHandler@f5958c9{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
17:04:45 zookeeper | [2024-09-29 17:02:05,368] INFO Started ServerConnector@436813f3{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
17:04:45 zookeeper | [2024-09-29 17:02:05,368] INFO Started @786ms (org.eclipse.jetty.server.Server)
17:04:45 zookeeper | [2024-09-29 17:02:05,368] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,374] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
17:04:45 zookeeper | [2024-09-29 17:02:05,375] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
17:04:45 zookeeper | [2024-09-29 17:02:05,377] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
17:04:45 zookeeper | [2024-09-29 17:02:05,378] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
17:04:45 zookeeper | [2024-09-29 17:02:05,389] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
17:04:45 zookeeper | [2024-09-29 17:02:05,390] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
17:04:45 zookeeper | [2024-09-29 17:02:05,390] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
17:04:45 zookeeper | [2024-09-29 17:02:05,390] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
17:04:45 zookeeper | [2024-09-29 17:02:05,394] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
17:04:45 zookeeper | [2024-09-29 17:02:05,394] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
17:04:45 zookeeper | [2024-09-29 17:02:05,397] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
17:04:45 zookeeper | [2024-09-29 17:02:05,398] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
17:04:45 zookeeper | [2024-09-29 17:02:05,398] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
17:04:45 zookeeper | [2024-09-29 17:02:05,406] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
17:04:45 zookeeper | [2024-09-29 17:02:05,409] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
17:04:45 zookeeper | [2024-09-29 17:02:05,420] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
17:04:45 zookeeper | [2024-09-29 17:02:05,420] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
17:04:45 zookeeper | [2024-09-29 17:02:06,310] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
17:04:45 ===================================
17:04:45 Tearing down containers...
17:04:46 Container policy-csit Stopping
17:04:46 Container grafana Stopping
17:04:46 Container policy-apex-pdp Stopping
17:04:46 Container policy-csit Stopped
17:04:46 Container policy-csit Removing
17:04:46 Container policy-csit Removed
17:04:46 Container grafana Stopped
17:04:46 Container grafana Removing
17:04:46 Container grafana Removed
17:04:46 Container prometheus Stopping
17:04:46 Container prometheus Stopped
17:04:46 Container prometheus Removing
17:04:46 Container prometheus Removed
17:04:56 Container policy-apex-pdp Stopped
17:04:56 Container policy-apex-pdp Removing
17:04:56 Container policy-apex-pdp Removed
17:04:56 Container simulator Stopping
17:04:56 Container policy-pap Stopping
17:05:06 Container simulator Stopped
17:05:06 Container simulator Removing
17:05:06 Container simulator Removed
17:05:06 Container policy-pap Stopped
17:05:06 Container policy-pap Removing
17:05:06 Container policy-pap Removed
17:05:06 Container policy-api Stopping
17:05:06 Container kafka Stopping
17:05:07 Container kafka Stopped
17:05:07 Container kafka Removing
17:05:07 Container kafka Removed
17:05:07 Container zookeeper Stopping
17:05:08 Container zookeeper Stopped
17:05:08 Container zookeeper Removing
17:05:08 Container zookeeper Removed
17:05:17 Container policy-api Stopped
17:05:17 Container policy-api Removing
17:05:17 Container policy-api Removed
17:05:17 Container policy-db-migrator Stopping
17:05:17 Container policy-db-migrator Stopped
17:05:17 Container policy-db-migrator Removing
17:05:17 Container policy-db-migrator Removed
17:05:17 Container mariadb Stopping
17:05:17 Container mariadb Stopped
17:05:17 Container mariadb Removing
17:05:17 Container mariadb Removed
17:05:17 Network compose_default Removing
17:05:18 Network compose_default Removed
17:05:18 $ ssh-agent -k
17:05:18 unset SSH_AUTH_SOCK;
17:05:18 unset SSH_AGENT_PID;
17:05:18 echo Agent pid 2067 killed;
17:05:18 [ssh-agent] Stopped.
17:05:18 Robot results publisher started...
17:05:18 INFO: Checking test criticality is deprecated and will be dropped in a future release!
17:05:18 -Parsing output xml:
17:05:18 Done!
17:05:18 -Copying log files to build dir:
17:05:18 Done!
17:05:18 -Assigning results to build:
17:05:18 Done!
17:05:18 -Checking thresholds:
17:05:18 Done!
17:05:18 Done publishing Robot results.
17:05:18 [PostBuildScript] - [INFO] Executing post build scripts.
17:05:18 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins9281353350998257942.sh
17:05:18 ---> sysstat.sh
17:05:19 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins14625436657502872754.sh
17:05:19 ---> package-listing.sh
17:05:19 ++ facter osfamily
17:05:19 ++ tr '[:upper:]' '[:lower:]'
17:05:19 + OS_FAMILY=debian
17:05:19 + workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap
17:05:19 + START_PACKAGES=/tmp/packages_start.txt
17:05:19 + END_PACKAGES=/tmp/packages_end.txt
17:05:19 + DIFF_PACKAGES=/tmp/packages_diff.txt
17:05:19 + PACKAGES=/tmp/packages_start.txt
17:05:19 + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']'
17:05:19 + PACKAGES=/tmp/packages_end.txt
17:05:19 + case "${OS_FAMILY}" in
17:05:19 + dpkg -l
17:05:19 + grep '^ii'
17:05:19 + '[' -f /tmp/packages_start.txt ']'
17:05:19 + '[' -f /tmp/packages_end.txt ']'
17:05:19 + diff /tmp/packages_start.txt /tmp/packages_end.txt
17:05:19 + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']'
17:05:19 + mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/
17:05:19 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/
17:05:19 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins18207706321002140184.sh
17:05:19 ---> capture-instance-metadata.sh
17:05:19 Setup pyenv:
17:05:19 system
17:05:19 3.8.13
17:05:19 3.9.13
17:05:19 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
17:05:19 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pcnE from file:/tmp/.os_lf_venv
17:05:20 lf-activate-venv(): INFO: Installing: lftools
17:05:28 lf-activate-venv(): INFO: Adding /tmp/venv-pcnE/bin to PATH
17:05:28 INFO: Running in OpenStack, capturing instance metadata
17:05:29 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins10391898948711323695.sh
17:05:29 provisioning config files...
17:05:29 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config11365254545203600959tmp
17:05:29 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
17:05:29 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
17:05:29 [EnvInject] - Injecting environment variables from a build step.
17:05:29 [EnvInject] - Injecting as environment variables the properties content
17:05:29 SERVER_ID=logs
17:05:29
17:05:29 [EnvInject] - Variables injected successfully.
17:05:29 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins1781706619476890622.sh
17:05:29 ---> create-netrc.sh
17:05:29 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins5068272558587322932.sh
17:05:29 ---> python-tools-install.sh
17:05:29 Setup pyenv:
17:05:29 system
17:05:29 3.8.13
17:05:29 3.9.13
17:05:29 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
17:05:29 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pcnE from file:/tmp/.os_lf_venv
17:05:30 lf-activate-venv(): INFO: Installing: lftools
17:05:39 lf-activate-venv(): INFO: Adding /tmp/venv-pcnE/bin to PATH
17:05:39 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins15785657810781798736.sh
17:05:39 ---> sudo-logs.sh
17:05:39 Archiving 'sudo' log..
17:05:40 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins5738995593353523246.sh
17:05:40 ---> job-cost.sh
17:05:40 Setup pyenv:
17:05:40 system
17:05:40 3.8.13
17:05:40 3.9.13
17:05:40 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
17:05:40 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pcnE from file:/tmp/.os_lf_venv
17:05:41 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
17:05:44 lf-activate-venv(): INFO: Adding /tmp/venv-pcnE/bin to PATH
17:05:44 INFO: No Stack...
17:05:45 INFO: Retrieving Pricing Info for: v3-standard-8
17:05:45 INFO: Archiving Costs
17:05:45 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins11619162459842810380.sh
17:05:45 ---> logs-deploy.sh
17:05:45 Setup pyenv:
17:05:45 system
17:05:45 3.8.13
17:05:45 3.9.13
17:05:45 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
17:05:45 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pcnE from file:/tmp/.os_lf_venv
17:05:46 lf-activate-venv(): INFO: Installing: lftools
17:05:54 lf-activate-venv(): INFO: Adding /tmp/venv-pcnE/bin to PATH
17:05:54 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/134
17:05:54 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
17:05:55 Archives upload complete.
17:05:55 INFO: archiving logs to Nexus
17:05:56 ---> uname -a:
17:05:56 Linux prd-ubuntu1804-docker-8c-8g-43530 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
17:05:56
17:05:56
17:05:56 ---> lscpu:
17:05:56 Architecture: x86_64
17:05:56 CPU op-mode(s): 32-bit, 64-bit
17:05:56 Byte Order: Little Endian
17:05:56 CPU(s): 8
17:05:56 On-line CPU(s) list: 0-7
17:05:56 Thread(s) per core: 1
17:05:56 Core(s) per socket: 1
17:05:56 Socket(s): 8
17:05:56 NUMA node(s): 1
17:05:56 Vendor ID: AuthenticAMD
17:05:56 CPU family: 23
17:05:56 Model: 49
17:05:56 Model name: AMD EPYC-Rome Processor
17:05:56 Stepping: 0
17:05:56 CPU MHz: 2799.998
17:05:56 BogoMIPS: 5599.99
17:05:56 Virtualization: AMD-V
17:05:56 Hypervisor vendor: KVM
17:05:56 Virtualization type: full
17:05:56 L1d cache: 32K
17:05:56 L1i cache: 32K
17:05:56 L2 cache: 512K
17:05:56 L3 cache: 16384K
17:05:56 NUMA node0 CPU(s): 0-7
17:05:56 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
17:05:56
17:05:56
17:05:56 ---> nproc:
17:05:56 8
17:05:56
17:05:56
17:05:56 ---> df -h:
17:05:56 Filesystem Size Used Avail Use% Mounted on
17:05:56 udev 16G 0 16G 0% /dev
17:05:56 tmpfs 3.2G 708K 3.2G 1% /run
17:05:56 /dev/vda1 155G 14G 142G 9% /
17:05:56 tmpfs 16G 0 16G 0% /dev/shm
17:05:56 tmpfs 5.0M 0 5.0M 0% /run/lock
17:05:56 tmpfs 16G 0 16G 0% /sys/fs/cgroup
17:05:56 /dev/vda15 105M 4.4M 100M 5% /boot/efi
17:05:56 tmpfs 3.2G 0 3.2G 0% /run/user/1001
17:05:56
17:05:56
17:05:56 ---> free -m:
17:05:56 total used free shared buff/cache available
17:05:56 Mem: 32167 884 25062 0 6220 30827
17:05:56 Swap: 1023 0 1023
17:05:56
17:05:56
17:05:56 ---> ip addr:
17:05:56 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
17:05:56 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
17:05:56 inet 127.0.0.1/8 scope host lo
17:05:56 valid_lft forever preferred_lft forever
17:05:56 inet6 ::1/128 scope host
17:05:56 valid_lft forever preferred_lft forever
17:05:56 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
17:05:56 link/ether fa:16:3e:b6:f2:76 brd ff:ff:ff:ff:ff:ff
17:05:56 inet 10.30.107.60/23 brd 10.30.107.255 scope global dynamic ens3
17:05:56 valid_lft 86008sec preferred_lft 86008sec
17:05:56 inet6 fe80::f816:3eff:feb6:f276/64 scope link
17:05:56 valid_lft forever preferred_lft forever
17:05:56 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
17:05:56 link/ether 02:42:13:24:e4:00 brd ff:ff:ff:ff:ff:ff
17:05:56 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
17:05:56 valid_lft forever preferred_lft forever
17:05:56 inet6 fe80::42:13ff:fe24:e400/64 scope link
17:05:56 valid_lft forever preferred_lft forever
17:05:56
17:05:56
17:05:56 ---> sar -b -r -n DEV:
17:05:56 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-43530) 09/29/24 _x86_64_ (8 CPU)
17:05:56
17:05:56 16:59:25 LINUX RESTART (8 CPU)
17:05:56
17:05:56 17:00:03 tps rtps wtps bread/s bwrtn/s
17:05:56 17:01:01 331.77 37.49 294.28 1749.63 25792.79
17:05:56 17:02:01 414.38 22.03 392.35 2695.28 163238.13
17:05:56 17:03:01 300.52 9.22 291.30 385.95 34240.14
17:05:56 17:04:01 81.84 0.20 81.64 12.66 26638.95
17:05:56 17:05:01 27.90 0.02 27.88 3.73 18071.12
17:05:56 Average: 230.60 13.63 216.97 964.19 53781.89
17:05:56
17:05:56 17:00:03 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
17:05:56 17:01:01 30090080 31663720 2849140 8.65 70516 1814136 1430912 4.21 907940 1650544 154148
17:05:56 17:02:01 25868092 31431284 7071128 21.47 130984 5583192 4783196 14.07 1216432 5322912 1592
17:05:56 17:03:01 23386324 29440372 9552896 29.00 163388 5984840 9159316 26.95 3472132 5455324 80000
17:05:56 17:04:01 23545900 29592064 9393320 28.52 171960 5970768 9042276 26.60 3330220 5438344 224
17:05:56 17:05:01 23870264 29911472 9068956 27.53 172216 5970524 7369460 21.68 3030108 5430512 280
17:05:56 Average: 25352132 30407782 7587088 23.03 141813 5064692 6357032 18.70 2391366 4659527 47249
17:05:56
17:05:56 17:00:03 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
17:05:56 17:01:01 ens3 84.24 61.35 953.71 25.57 0.00 0.00 0.00 0.00
17:05:56 17:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:05:56 17:01:01 lo 1.45 1.45 0.16 0.16 0.00 0.00 0.00 0.00
17:05:56 17:02:01 ens3 1156.41 626.28 31396.39 54.34 0.00 0.00 0.00 0.00
17:05:56 17:02:01 vethdf8ffe4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:05:56 17:02:01 br-34897d594ba7 0.00 0.12 0.00 0.01 0.00 0.00 0.00 0.00
17:05:56 17:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:05:56 17:03:01 ens3 55.86 38.09 1177.25 5.15 0.00 0.00 0.00 0.00
17:05:56 17:03:01 vethdf8ffe4 11.36 11.10 1.49 1.51 0.00 0.00 0.00 0.00
17:05:56 17:03:01 br-34897d594ba7 1.07 0.82 0.09 0.39 0.00 0.00 0.00 0.00
17:05:56 17:03:01 vethef2172d 24.26 22.46 10.95 16.10 0.00 0.00 0.00 0.00
17:05:56 17:04:01 ens3 6.51 5.16 6.39 1.29 0.00 0.00 0.00 0.00
17:05:56 17:04:01 vethdf8ffe4 15.69 11.10 1.41 1.64 0.00 0.00 0.00 0.00
17:05:56 17:04:01 br-34897d594ba7 0.33 0.20 0.02 0.01 0.00 0.00 0.00 0.00
17:05:56 17:04:01 vethef2172d 21.94 17.74 6.82 23.82 0.00 0.00 0.00 0.00
17:05:56 17:05:01 ens3 14.65 14.40 8.92 15.04 0.00 0.00 0.00 0.00
17:05:56 17:05:01 vethdf8ffe4 13.65 9.32 1.04 1.31 0.00 0.00 0.00 0.00
17:05:56 17:05:01 br-34897d594ba7 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:05:56 17:05:01 vethef2172d 0.42 0.58 0.59 0.04 0.00 0.00 0.00 0.00
17:05:56 Average: ens3 264.73 149.64 6746.92 20.24 0.00 0.00 0.00 0.00
17:05:56 Average: vethdf8ffe4 8.20 6.34 0.79 0.90 0.00 0.00 0.00 0.00
17:05:56 Average: br-34897d594ba7 0.29 0.23 0.02 0.08 0.00 0.00 0.00 0.00
17:05:56 Average: vethef2172d 9.39 8.21 3.70 8.05 0.00 0.00 0.00 0.00
17:05:56
17:05:56
17:05:56 ---> sar -P ALL:
17:05:56 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-43530) 09/29/24 _x86_64_ (8 CPU)
17:05:56
17:05:56 16:59:25 LINUX RESTART (8 CPU)
17:05:56
17:05:56 17:00:03 CPU %user %nice %system %iowait %steal %idle
17:05:56 17:01:01 all 8.85 0.00 0.95 3.75 0.03 86.41
17:05:56 17:01:01 0 18.35 0.00 1.24 1.14 0.03 79.23
17:05:56 17:01:01 1 14.33 0.00 1.54 3.51 0.07 80.54
17:05:56 17:01:01 2 5.89 0.00 0.57 0.26 0.02 93.27
17:05:56 17:01:01 3 0.42 0.00 0.59 19.41 0.02 79.57
17:05:56 17:01:01 4 2.55 0.00 0.29 0.17 0.02 96.97
17:05:56 17:01:01 5 1.66 0.00 1.12 2.64 0.02 94.57
17:05:56 17:01:01 6 2.07 0.00 0.43 0.69 0.02 96.79
17:05:56 17:01:01 7 25.63 0.00 1.81 2.23 0.05 70.28
17:05:56 17:02:01 all 14.57 0.00 6.62 9.17 0.06 69.58
17:05:56 17:02:01 0 22.99 0.00 8.16 40.89 0.10 27.86
17:05:56 17:02:01 1 15.01 0.00 6.02 0.37 0.05 78.55
17:05:56 17:02:01 2 13.61 0.00 5.72 1.87 0.05 78.75
17:05:56 17:02:01 3 11.80 0.00 6.53 14.17 0.08 67.42
17:05:56 17:02:01 4 13.70 0.00 6.95 7.05 0.05 72.25
17:05:56 17:02:01 5 12.75 0.00 6.80 1.33 0.05 79.07
17:05:56 17:02:01 6 11.96 0.00 6.76 5.80 0.05 75.43
17:05:56 17:02:01 7 14.79 0.00 5.99 2.25 0.05 76.91
17:05:56 17:03:01 all 29.64 0.00 3.99 2.76 0.09 63.52
17:05:56 17:03:01 0 33.14 0.00 3.85 2.83 0.08 60.10
17:05:56 17:03:01 1 22.02 0.00 3.47 1.11 0.08 73.32
17:05:56 17:03:01 2 28.97 0.00 3.78 6.41 0.12 60.73
17:05:56 17:03:01 3 26.84 0.00 3.84 1.68 0.08 67.55
17:05:56 17:03:01 4 31.62 0.00 3.94 2.87 0.07 61.50
17:05:56 17:03:01 5 32.79 0.00 4.71 2.75 0.10 59.65
17:05:56 17:03:01 6 30.25 0.00 3.80 2.59 0.10 63.26
17:05:56 17:03:01 7 31.51 0.00 4.53 1.86 0.08 62.01
17:05:56 17:04:01 all 7.43 0.00 1.21 1.43 0.06 89.87
17:05:56 17:04:01 0 6.76 0.00 1.22 0.12 0.05 91.86
17:05:56 17:04:01 1 6.63 0.00 0.97 0.42 0.07 91.92
17:05:56 17:04:01 2 6.96 0.00 1.36 7.75 0.05 83.89
17:05:56 17:04:01 3 8.02 0.00 1.24 0.07 0.08 90.59
17:05:56 17:04:01 4 8.03 0.00 0.88 0.00 0.05 91.03
17:05:56 17:04:01 5 6.42 0.00 1.59 1.98 0.07 89.94
17:05:56 17:04:01 6 8.75 0.00 1.35 1.11 0.05 88.74
17:05:56 17:04:01 7 7.85 0.00 1.07 0.00 0.07 91.01
17:05:56 17:05:01 all 1.90 0.00 0.49 1.21 0.05 96.35
17:05:56 17:05:01 0 1.54 0.00 0.48 0.15 0.05 97.78
17:05:56 17:05:01 1 2.41 0.00 0.55 0.05 0.07 96.93
17:05:56 17:05:01 2 1.28 0.00 0.50 9.07 0.05 89.09
17:05:56 17:05:01 3 1.40 0.00 0.48 0.15 0.03 97.93
17:05:56 17:05:01 4 1.62 0.00 0.47 0.03 0.03 97.84
17:05:56 17:05:01 5 2.87 0.00 0.38 0.07 0.05 96.63
17:05:56 17:05:01 6 2.07 0.00 0.58 0.17 0.07 97.12
17:05:56 17:05:01 7 1.97 0.00 0.47 0.03 0.03 97.49
17:05:56 Average: all 12.48 0.00 2.65 3.65 0.06 81.15
17:05:56 Average: 0 16.51 0.00 2.98 8.97 0.06 71.47
17:05:56 Average: 1 12.05 0.00 2.51 1.07 0.07 84.29
17:05:56 Average: 2 11.36 0.00 2.39 5.11 0.06 81.09
17:05:56 Average: 3 9.74 0.00 2.54 6.99 0.06 80.67
17:05:56 Average: 4 11.54 0.00 2.51 2.03 0.04 83.88
17:05:56 Average: 5 11.33 0.00 2.92 1.75 0.06 83.94
17:05:56 Average: 6 11.04 0.00 2.59 2.07 0.06 84.24
17:05:56 Average: 7 16.28 0.00 2.78 1.27 0.06 79.62
17:05:56
17:05:56
17:05:56